US20120290152A1 - Collaborative Engagement for Target Identification and Tracking


Info

Publication number: US20120290152A1
Application number: US13/546,787
Authority: US (United States)
Legal status: Abandoned
Inventor
Carol Carlin Cheung
Brian Masao Yamauchi
Christopher Vernon Jones
Mark Bourne Moseley
Sanjiv Singh
Christopher Michael Geyer
Benjamin Peter Grocholsky
Earl Clyde Cox
Current Assignee
Teledyne Flir Detection Inc
Original Assignee
Individual
Application filed by Individual
Priority to US13/546,787
Publication of US20120290152A1
Assigned to ENDEAVOR ROBOTICS, INC. (change of name; assignor: IROBOT DEFENSE HOLDINGS, INC.)
Assigned to FLIR DETECTION, INC. (assignment of assignors interest; assignor: ENDEAVOR ROBOTICS, INC.)


Classifications

    • G05D 1/00 - Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot (G - Physics; G05D - Systems for controlling or regulating non-electric variables), in particular:
    • G05D 1/0038 - associated with a remote control arrangement, providing the operator with simple or augmented images from one or more cameras located onboard the vehicle, e.g. tele-operation
    • G01S 7/003 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00; transmission of data between radar, sonar or lidar systems and remote stations
    • G05D 1/0088 - characterized by the autonomous decision making process, e.g. artificial intelligence, predefined behaviours
    • G05D 1/0094 - involving pointing a payload, e.g. camera, weapon, sensor, towards a fixed or moving target
    • G05D 1/0274 - control in two dimensions specially adapted to land vehicles, using internal positioning means using mapping information stored in a memory device
    • G05D 1/104 - simultaneous control of position or course in three dimensions specially adapted for aircraft, involving a plurality of aircraft, e.g. formation flying
    • G05D 1/024 - control in two dimensions specially adapted to land vehicles, using optical position detecting means using obstacle or wall sensors in combination with a laser
    • G05D 1/0242 - using non-visible light signals, e.g. IR or UV signals
    • G05D 1/0246 - using a video camera in combination with image processing means
    • G05D 1/027 - using internal positioning means comprising inertial navigation means, e.g. azimuth detector
    • G05D 1/0272 - using internal positioning means comprising means for registering the travel distance, e.g. revolutions of wheels
    • G05D 1/0278 - using signals provided by a source external to the vehicle, using satellite positioning signals, e.g. GPS

Definitions

  • the present teachings relate to collaborative engagement of unmanned vehicles to identify, detect, and track a target.
  • the present teachings relate, more particularly, to collaboratively utilizing unmanned air and ground vehicles to identify, detect, and track a target in a variety of urban and non-urban environments.
  • the present teachings provide a method for controlling unmanned vehicles to maintain line-of-sight between a predetermined target and at least one unmanned vehicle.
  • the method comprises: providing an unmanned air vehicle including sensors configured to locate a target and an unmanned ground vehicle including sensors configured to locate and track the target; communicating and exchanging data to and among the unmanned vehicles; controlling the unmanned air vehicle and the unmanned ground vehicle to maintain line-of-sight between a predetermined target and at least one of the unmanned vehicles; geolocating the predetermined target with the unmanned air vehicle using information regarding a position of the unmanned air vehicle and information regarding a position of the target relative to the unmanned air vehicle; and transmitting information defining the geolocation of the predetermined target to the unmanned ground vehicle so that the unmanned ground vehicle can perform path planning based on the geolocation.
  • the present teachings also provide a collaborative engagement system comprising: at least one unmanned air vehicle including sensors configured to locate a target and at least one unmanned ground vehicle including sensors configured to locate and track a target; and a controller facilitating control of, and communication and exchange of data to and among, the unmanned vehicles, the controller facilitating data exchange via a common protocol.
  • the collaborative engagement system controls the unmanned vehicles to maintain line-of-sight between a predetermined target and at least one of the unmanned vehicles, geolocating the predetermined target with the unmanned air vehicle and transmitting information defining the geolocation of the predetermined target to the unmanned ground vehicle so that the unmanned ground vehicle can perform path planning based on the geolocation.
  • FIG. 1 illustrates an exemplary visibility map
  • FIG. 2 illustrates exemplary visibility codes from a variety of directions for the location illustrated in the visibility map of FIG. 1 .
  • FIG. 3 illustrates an exemplary visibility likelihood map for a uniform distribution of targets, cumulative of the directional visibility illustrated in FIG. 2 .
  • FIG. 4 illustrates an exemplary visibility likelihood map for a non-uniform distribution of targets.
  • FIG. 5 illustrates an exemplary visibility likelihood map generated when a non-uniform distribution of possible target positions is known.
  • FIG. 6 illustrates an exemplary grey-scale visibility map showing a likelihood that a target (dot at left side) can be viewed from any direction.
  • FIG. 7 illustrates an exemplary navigation cost map based on the values of FIG. 6 .
  • FIG. 8 illustrates an exemplary UGV for use in a system in accordance with the present teachings.
  • FIG. 9 illustrates an exemplary UAV for use in a system in accordance with the present teachings.
  • FIG. 10 illustrates exemplary functional blocks that can be utilized to plan mission execution.
  • FIG. 11 illustrates an exemplary Search Area Mission Task Component.
  • FIG. 12 illustrates an exemplary Pursue Target Mission Task Component.
  • FIG. 13 illustrates an exemplary Geolocate Target Mission Task Component.
  • FIG. 14 illustrates an exemplary Collaborate Path task.
  • FIG. 15 illustrates an exemplary network fusion by propagating inter-node differences.
  • FIG. 16 illustrates functional blocks required to implement fusion.
  • FIG. 17 illustrates an exemplary embodiment of an overall system for collaborative unmanned vehicle target detection and tracking.
  • FIG. 18 illustrates an exemplary embodiment of a decentralized fusion node for an unmanned vehicle agent.
  • FIG. 19 illustrates an exemplary embodiment of a Supervisor OCU interface.
  • more than one unmanned vehicle (including one or more UAVs and/or UGVs) is utilized, collaboratively, to search for, detect, track, and identify a target.
  • the unmanned vehicles collaborate to best ensure that at least one unmanned vehicle covers the target while the sights of the other vehicle(s) are blocked by, for example, an urban obstruction such as a building.
  • the present teachings contemplate giving unmanned vehicles the intelligence to decide which positions will maximize potential sight lines, to predict (in certain embodiments of the present teachings with operator assistance and guidance) where a target will go, and to allow teams of vehicles to collaborate in achieving full coverage of a target.
  • An exemplary embodiment of an overall system for collaborative unmanned vehicle target detection and tracking is illustrated in FIG. 17 .
  • a Supervisor operator control unit (OCU) communicates with and controls at least one UGV and at least one UAV via, for example, radio frequency (RF) and/or Ethernet.
  • a ground control station for the UAV is used in addition to the Supervisor OCU for UAV communication and control.
  • This exemplary hardware architecture integrates at least one UGV and at least one UAV with a single common controller, the Supervisor OCU. Communication to and among the unmanned vehicles enables the desired collaboration.
  • the required information to be shared includes the target's state estimate and its covariance matrix, as well as the uncertainty of the target's state.
  • the target data gets fused as described below regarding DDF architecture. Collaboration occurs when the fused target estimate is updated among the unmanned vehicles.
  • the Supervisor OCU can provide the fused track estimate to a system operator.
  • the Raven communication and control hardware can comprise a hand controller, a hub unit, an RF unit, and an antenna(e) post.
  • the GCS hub unit can process and convert the message, telemetry, and hand controls to Cursor-on-Target (CoT) messages to be received by the UAV platform.
  • the GCS hub and the illustrated FreeWave radio can interface with the Supervisor OCU via an Ethernet hub for computationally intensive tasks.
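  • As a concrete illustration of the Cursor-on-Target (CoT) messaging mentioned above, the sketch below builds a minimal CoT XML event carrying a geodetic point. CoT is an XML event schema; the specific uid, type code, error values, and stale interval used here are placeholder assumptions for illustration and are not taken from the patent or from a particular GCS implementation.

```python
# Illustrative sketch of building a minimal Cursor-on-Target (CoT) event.
# The uid, type code, and stale interval below are placeholder assumptions,
# not values taken from the patent or from a specific GCS implementation.
import xml.etree.ElementTree as ET
from datetime import datetime, timedelta, timezone

def cot_event(uid, lat, lon, hae_m, cot_type="a-f-A-M-F-Q", stale_s=60):
    now = datetime.now(timezone.utc)
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    event = ET.Element("event", {
        "version": "2.0",
        "uid": uid,                      # platform identifier (assumed)
        "type": cot_type,                # CoT type hierarchy code (assumed)
        "time": now.strftime(fmt),
        "start": now.strftime(fmt),
        "stale": (now + timedelta(seconds=stale_s)).strftime(fmt),
        "how": "m-g",                    # machine-generated, GPS-derived
    })
    # Geodetic position of the platform or track, with circular/linear error.
    ET.SubElement(event, "point", {
        "lat": f"{lat:.6f}", "lon": f"{lon:.6f}",
        "hae": f"{hae_m:.1f}", "ce": "10.0", "le": "10.0",
    })
    return ET.tostring(event, encoding="unicode")

print(cot_event("raven-01", 39.7392, -104.9903, 1650.0))
```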
  • the present teachings contemplate developing a system allowing a team of unmanned vehicles to search urban terrain for an elusive human dismount target or non-human target, track the target even if it attempts to avoid detection, and pursue and engage the target on command from an operator.
  • the present teachings are implemented on a PackBot as the UGV and an AeroVironment Raven or AirRobot quad-rotor platform as a UAV.
  • UGVs and UAVs may be utilized collaboratively in accordance with the present teachings.
  • Certain embodiments of the present teachings contemplate integrating existing or developing visual tracking algorithms (such as, for example, those being developed by the Air Force Research Laboratory (AFRL)) with existing situational awareness frameworks (such as, for example, the AFRL Layered Sensing model), which can be augmented by human assistance from an operator (using, for example, an operator control unit such as that provided for an iRobot PackBot) in the area of, for example, identifying the most likely targets.
  • identified targets can be provided to the unmanned vehicle teams in terms of global positioning system (GPS) coordinates.
  • the present teachings further contemplate utilizing, for example, an a priori digital terrain elevation data (DTED) map of the urban terrain, from which target paths can be predicted (in some embodiments with operator assistance), and motion of the unmanned vehicles can be planned to maximize probability of keeping a target in view despite the presence of occluding obstacles.
  • Certain embodiments of the present teachings provide such tracking and predicting a location of a target in the presence of occlusions (such as those that exist in urban environments) using certain predefined algorithms, and integration of those algorithms with semi-autonomous or autonomous behaviors such as navigation and obstacle avoidance behaviors suitable for real-world urban terrain.
  • the present teachings provide a UGV that is equipped with an orientation sensor such as a GPS or INS/GPS system (such as, for example, an Athena Micro Guidestar INS/GPS or a MicroStrain 3DM-GX1 orientation sensor) for navigation based on both GPS and INS, including navigation in occluded spaces such as urban canyons that may intermittently block GPS signals.
  • the UGV can be equipped with a payload such as a Navigator Payload (which can include, for example, a stereo vision system, GPS, LIDAR (e.g., SICK LIDAR) integrated with GPS, an IMU, a gyro, a radio and a dedicated processor (for example running iRobot's proprietary Aware 2.0 software architecture)).
  • the Navigator payload can provide, for example, on-board integrated obstacle avoidance and waypoint following behaviors through complex terrain.
  • the UGV can additionally be equipped with a camera (e.g., a Sony zoom camera) on a pan/tilt (e.g., a TRACIabs Biclops pan/tilt) mount to keep a target in view from the ground.
  • the present teachings provide a UAV and UGV team that can track and potentially engage a human or non-human target.
  • a single operator can control one or more unmanned vehicles to perform the operations necessary to search for, track, monitor, and/or destroy selected targets.
  • This functionality can be implemented in accordance with the present teachings by utilizing a Layered Sensing shared situational awareness system that can determine the location of targets using combined machine perception and human feedback.
  • the Layered Sensing system has been defined (by AFRL) as providing “military and homeland security decision makers at all levels with timely, actionable, trusted, and relevant information necessary for situational awareness to ensure their decisions achieve the desired military/humanitarian effects.”
  • Layered Sensing is characterized by the appropriate sensor or combination of sensors/platforms, infrastructure and exploitation capabilities to generate that situational awareness and directly support delivery of “tailored effects.”
  • the Layered Sensing system can direct an unmanned vehicle team to investigate a target and determine an optimal path to fly to view the target. It can also return views of the target from the air and the ground for operator (and other personnel) review.
  • using an a priori map and terrain data such as DTED terrain data, it can predict the target's location or assist an operator in predicting the target's location and, based on such prediction, determine an optimal path to fly to view the target.
  • one or more of the unmanned vehicles in the team can utilize predictive algorithms in accordance with the present teachings to fly a search pattern to attempt to find the target. If the target is spotted by a team member, that team member—using its own GPS coordinates to determine GPS coordinates of the target—can send the target location to other team members.
  • the UAV has mounted thereon one or more cameras that can, for example, be mounted in gimbals (e.g., a Cloud Cap Technology TASE gimbal) for optimal range of motion. If more than one camera is used, one camera can face forward and one camera can face to the side to keep the target in view.
  • the cameras allow the UAV to keep the target in view.
  • Another team member such as an unmanned ground vehicle (UGV) can then navigate autonomously (or semi-autonomously with operator assistance) to the target location using, for example, GPS, INS, compass, and odometry for localization and LIDAR for obstacle avoidance.
  • the LIDAR obstacle sensing can be integrated with terrain data from maps or from another source such as a team member.
  • a path planning algorithm such as A* or a Rapidly-exploring Random Tree (RRT) can be utilized to plan a path to the target based on an a priori map.
  • An RRT is a data structure and algorithm, widely used in robot path planning, designed for efficiently searching non-convex, high-dimensional search spaces. Simply put, the tree is constructed in such a way that any sample in the space is added by connecting it to the closest sample already in the tree.
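  • The following minimal sketch illustrates the RRT construction just described: random samples are connected to the nearest node already in the tree, stepping a bounded distance toward each sample. The 2D workspace, step size, goal bias, and obstacle test are illustrative assumptions, not the patent's planner.

```python
# A minimal RRT sketch in 2D, following the construction described above:
# each random sample is connected to the closest node already in the tree.
# The bounds, step size, and goal test are illustrative assumptions.
import math
import random

def rrt(start, goal, is_free, bounds, step=1.0, iters=5000, goal_tol=1.5):
    nodes = [start]                 # tree vertices
    parent = {0: None}              # index of each node's parent
    for _ in range(iters):
        # Sample a random point (occasionally bias toward the goal).
        q = goal if random.random() < 0.05 else (
            random.uniform(bounds[0], bounds[1]),
            random.uniform(bounds[2], bounds[3]))
        # Find the nearest existing node and step toward the sample.
        i_near = min(range(len(nodes)),
                     key=lambda i: math.dist(nodes[i], q))
        near = nodes[i_near]
        d = math.dist(near, q)
        new = q if d <= step else (near[0] + step * (q[0] - near[0]) / d,
                                   near[1] + step * (q[1] - near[1]) / d)
        if not is_free(new):        # skip samples inside obstacles
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i_near
        if math.dist(new, goal) < goal_tol:
            # Walk back up the tree to recover the path.
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return list(reversed(path))
    return None

path = rrt((0.0, 0.0), (18.0, 12.0),
           is_free=lambda p: not (5 < p[0] < 8 and 0 < p[1] < 10),
           bounds=(0, 20, 0, 20))
```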
  • when a team member arrives in proximity of the target, the team member can use its camera to attain a close-up view of the target. Then, as the target moves, the unmanned vehicle team is controlled to best maintain a view of the target despite occluding obstacles, using a combination of the target prediction algorithms and local navigation behaviors such as obstacle avoidance.
  • occlusion planning and minimization can be accomplished as follows:
  • the system attempts to evaluate or predict where the target is likely to be within a short time horizon (e.g., one to two minutes) by computing a distribution p_t(x) that gives a probability that the target is at x at time t.
  • the distribution can be represented and updated efficiently using particle filters, which are an extension of a Kalman-type filter to multi-modal distributions.
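  • The sketch below illustrates a generic particle-filter representation of the target distribution p_t(x): particles are diffused with a motion model, re-weighted by an observation likelihood, and resampled. The random-walk motion model, noise parameters, and observation model are assumptions for illustration and are not the specific filter described in the present teachings.

```python
# A minimal particle-filter sketch for propagating the target distribution
# p_t(x) described above. The random-walk motion model, noise levels, and
# measurement model are illustrative assumptions, not the patent's algorithm.
import numpy as np

rng = np.random.default_rng(0)
N = 2000
particles = rng.uniform(0, 100, size=(N, 2))   # initial guess: anywhere on a 100 m map
weights = np.full(N, 1.0 / N)

def predict(particles, dt=1.0, speed_sigma=1.5):
    # Diffuse particles with a random-walk motion model over one time step.
    return particles + rng.normal(0, speed_sigma * dt, size=particles.shape)

def update(particles, weights, z, sensor_sigma=5.0):
    # Re-weight particles by the likelihood of a noisy position observation z.
    d = np.linalg.norm(particles - z, axis=1)
    weights = weights * np.exp(-0.5 * (d / sensor_sigma) ** 2)
    weights /= weights.sum()
    return weights

def resample(particles, weights):
    # Resampling keeps the particle set focused on likely regions.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

for t in range(10):
    particles = predict(particles)
    weights = update(particles, weights, z=np.array([60.0, 40.0]))
    particles, weights = resample(particles, weights)

estimate = np.average(particles, axis=0, weights=weights)
```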
  • the system can then attempt to predict where unmanned vehicle team members can be positioned to best “see” a target.
  • This computation can be based on a pre-computed visibility map and a distribution of where the target is likely to be.
  • this computation can be performed using a GPU such as Quantum3D's COTS GPU for real-time computation.
  • a GPU is a dedicated graphics rendering device that is very efficient at manipulating and displaying computer graphics. Its highly parallel structure makes it more effective than general-purpose CPUs for a range of complex algorithms.
  • the visibility map is computed ahead of time, so that at every position P y
  • the GPU can be used to accumulate the polygons in a buffer to generate a visibility map, an example of which is illustrated in FIG. 1 and discussed in more detail below.
  • the illustrated polygons are equally spaced on a grid, and each polygon represents the visibility at its center point. A full circle means that the area is unoccluded (in this map, by the illustrated buildings). Otherwise, the polygon is effectively a radial plot representing the elevation angles from which the point is visible. Therefore, the smaller the polygon, the less visible its center point is from the air.
  • Visibility codes are then generated for the area illustrated in FIG. 1 . These visibility codes can be illustrated, for example, for eight different directions in which visibility can be evaluated. The visibility codes are illustrated in FIG. 2 for the eight different directions. Shades of gray indicate the level of occlusion in the illustrated direction, with darker areas representing more occlusion. Thus, darker areas are less visible from the given direction; i.e., the minimum elevation angle at which the sky is visible is higher the darker the area is.
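  • The sketch below illustrates one way such per-direction visibility codes could be computed on a height grid: for each cell and each of eight compass directions, terrain is ray-marched outward and the maximum elevation angle subtended by intervening terrain is recorded (a higher angle means the point is more occluded from that direction). The grid, ranges, and CPU implementation are illustrative assumptions; the present teachings describe performing such computations on a GPU.

```python
# Illustrative sketch of computing per-direction visibility codes from a
# DTED-like height grid: for each cell and each of eight compass directions,
# march outward and record the maximum elevation angle subtended by terrain.
# Lower angles mean the point is visible from lower in the sky in that
# direction. Grid resolution and ranges are assumptions for the sketch.
import math
import numpy as np

DIRS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

def visibility_codes(height, cell_size=1.0, max_steps=50):
    rows, cols = height.shape
    codes = np.zeros((rows, cols, len(DIRS)))   # min sky-visible elevation angle (rad)
    for r in range(rows):
        for c in range(cols):
            for k, (dr, dc) in enumerate(DIRS):
                worst = 0.0                      # flat horizon if nothing occludes
                for s in range(1, max_steps + 1):
                    rr, cc = r + s * dr, c + s * dc
                    if not (0 <= rr < rows and 0 <= cc < cols):
                        break
                    rise = height[rr, cc] - height[r, c]
                    run = s * cell_size * math.hypot(dr, dc)
                    worst = max(worst, math.atan2(rise, run))
                codes[r, c, k] = worst
    return codes

# Toy map: flat ground with one tall "building"; higher codes mean the sky is
# only visible above a steeper angle, i.e. the point is more occluded.
terrain = np.zeros((20, 20))
terrain[8:12, 8:12] = 15.0
codes = visibility_codes(terrain)
```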
  • FIG. 3 represents a map that is cumulative of the directional visibility illustrated in FIG. 2 .
  • FIG. 3 's visibility likelihood map assumes that the target could be anywhere in the map (i.e., assuming a uniform distribution of the position of the dismount on the ground). Lighter areas are positions from which one is less likely to see the target. Notice that the center area is lighter because of the greater number of occlusions caused by the buildings.
  • FIG. 5 illustrates a visibility likelihood map generated when a non-uniform distribution of possible target positions is known, as illustrated in FIG. 4 .
  • the area from which the target is likely to be visible is, understandably, much more concentrated than in the map of FIG. 3 .
  • the system next calculates a path for one or more unmanned vehicle team members that minimizes loss of the target. Paths are generated by the RRT and evaluated by the system and/or one or more operators to determine a path that minimizes a given criterion (e.g., the amount of time a target is lost). In certain embodiments of the present teachings, the system chooses a path y(t) that maximizes ∫ p_t[y(t)] dt, where the integral is performed over the time horizon.
  • the present teachings provide the capability to evaluate a number of paths and choose a path or accept instructions from an operator regarding path choice. Once the path is selected, one or more unmanned vehicle team members are directed in accordance with those paths to execute autonomous navigation.
  • the present teachings can combine RRTs to represent possible trajectories of the unmanned vehicles and Monte Carlo methods to represent the uncertainty about where the target is.
  • Possible target trajectories are constructed over a finite time horizon and, during RRT traversal, the system tracks how many times it has seen a particle for each RRT node. This is because it can be disadvantageous to continue following a particle that has already been seen, and so a cost function can discount particles that have been seen more than once.
  • This method can generate one or more paths that sweep out and attempt to consume the probability mass of where the target may be.
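  • The sketch below illustrates the scoring idea described above: candidate paths (for example, paths drawn from an RRT) are scored by how much target probability mass (particles) they bring into sensor view, with particles that have already been seen discounted so a path is not rewarded repeatedly for the same hypothesis. The sensor radius and discount factor are illustrative assumptions.

```python
# A sketch of path scoring that "consumes" the probability mass of where the
# target may be: each candidate path earns credit for particles brought within
# sensor range, with revisited particles heavily discounted. Sensor radius and
# the discount factor are illustrative assumptions.
import math

def score_path(path, particles, sensor_radius=10.0, revisit_discount=0.2):
    seen_count = [0] * len(particles)       # how many times each particle was seen
    score = 0.0
    for waypoint in path:                   # one node per time step along the path
        for i, p in enumerate(particles):
            if math.dist(waypoint, p) <= sensor_radius:
                # Full credit the first time, heavily discounted afterwards.
                score += 1.0 if seen_count[i] == 0 else revisit_discount
                seen_count[i] += 1
    return score

def best_path(candidate_paths, particles):
    # Pick the candidate that sweeps up the most (discounted) probability mass.
    return max(candidate_paths, key=lambda path: score_path(path, particles))
```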
  • Monte Carlo methods with which those skilled in the art are familiar, are a class of computational algorithms that rely on repeated random sampling to compute their results.
  • Monte Carlo methods are often used when simulating physical and mathematical systems, and when it is infeasible or impossible to compute an exact result.
  • the present teachings contemplate, for example, evaluating more than 64,000 trajectories, and at each of the 128,000 RRT nodes, evaluating the visibility of particles, all at a rate of 1 Hz.
  • various embodiments of the present teachings perform visibility computations using DTED data (e.g., Level 4 data or higher (1/9th or 1/27th arc-second spacing)) to create a map representing the visibility at each location.
  • Performing these computations on a GPU allows rapid map generation and real-time calculation of visibility by rendering polygons representing visibility (see FIG. 1 ) and obtaining a map (e.g., a color-coded map) therefrom whose values (colors) tell the system and/or an operator how likely an unmanned vehicle team member, and particularly a UAV, is to be able to view a target from a given point.
  • the different colors utilized in the map can represent the direction from which the target is visible.
  • a red-colored area on the map can represent an area from which a target is visible to the east.
  • Light blue, on the other hand, can indicate an area from which the target is visible to the west.
  • Brighter color can, for example, indicate an area where an unmanned vehicle is more likely to see a target (from the color-indicated direction).
  • a mixture of colors can be used to indicate more than one direction from which the target may be visible.
  • Such a map can be calculated for either a concentrated (there is some idea where the target is) or uniform (target could be anywhere) distribution of target position.
  • the system (e.g., the GPU) can nevertheless compute a best location by accumulating polygons over all possible positions.
  • the system (e.g., the GPU) can compute visibility maps several times per second.
  • a grey-scale visibility map can be generated and utilized, such as that illustrated in FIG. 6 , which shows a likelihood that a target (dot at left side) can be viewed from any direction. A marginal probability of viewing the target from any direction from a given point is shown. Lighter indicates an increased chance of viewing the target. Thus, white areas correspond to viewpoints from which the target is likely to be viewed, and dark areas correspond to viewpoints from which the target is unlikely to be viewed. Such a map can be used for computing an A* or RRT path.
  • FIG. 7 illustrates a navigation cost map based on the values of FIG. 6 , with equal cost contours.
  • the visibility map is used as a cost map for computing an unmanned vehicle surveillance path, such as an A* path.
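  • The following compact A* sketch shows how a visibility-derived cost map could drive surveillance path planning, with low-visibility cells assigned high traversal cost; the grid encoding and costs are illustrative assumptions rather than the patent's specific planner.

```python
# A compact A* sketch over a grid whose per-cell cost comes from a
# visibility-derived cost map, as described above. Treating low-visibility
# cells as expensive is an assumed encoding for this sketch.
import heapq
import itertools

def astar(cost, start, goal):
    rows, cols = len(cost), len(cost[0])
    h = lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1])  # admissible if min cell cost >= 1
    tie = itertools.count()                                   # tiebreaker avoids comparing nodes
    open_set = [(h(start), next(tie), start)]
    came_from = {start: None}
    g_best = {start: 0.0}
    while open_set:
        _, _, node = heapq.heappop(open_set)
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return list(reversed(path))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (node[0] + dr, node[1] + dc)
            if 0 <= nb[0] < rows and 0 <= nb[1] < cols:
                ng = g_best[node] + cost[nb[0]][nb[1]]        # entering a cell costs its map value
                if ng < g_best.get(nb, float("inf")):
                    g_best[nb] = ng
                    came_from[nb] = node
                    heapq.heappush(open_set, (ng + h(nb), next(tie), nb))
    return None

# Cells where the target is unlikely to be visible get high cost (assumed encoding).
grid = [[1, 1, 9, 1],
        [1, 9, 9, 1],
        [1, 1, 1, 1]]
print(astar(grid, (0, 0), (0, 3)))
```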
  • the grayscale background of FIG. 7 shows the cost to come from any place on the map to the peak of the cost function. Contours of equal cost emanate from the starting position (the red dot in FIG. 6 ).
  • White (lighter) areas correspond to a lower total travel cost (or higher likelihood of detection) and darker areas correspond to a higher total travel cost (lower likelihood of detection).
  • the two paths traverse to the peak of the cost function, visiting the most likely areas for finding the target.
  • the planner could decide that, because of the large high probability area in the lower right corner where the target is likely to be seen, the unmanned vehicle should follow this path through the high probability area instead of taking the upper path, although the upper path may be shorter.
  • Thermal vision target tracking can be accomplished, for example, by equipping one or more of the unmanned vehicle team members (e.g., a UGV) with a thermal infrared camera.
  • the thermal infrared camera can comprise, for example, an FLIR Photon thermal imager.
  • Thermal imaging is particularly useful for tracking human targets when the ambient temperature is less than about 90 degrees.
  • an effective imaging range for a thermal imager can be extended to about 30 meters.
  • Tracking software can apply thresholding to the thermal image to eliminate isolated pixels to filter noise.
  • the centroid of the remaining points can then be used to determine a bearing to the target within the image plane.
  • a following behavior can turn the UGV to face the target based on a horizontal coordinate of the centroid, and can maintain a desired distance from the target based on a vertical coordinate of the centroid (i.e., if the target is higher (farther) in the image than desired, the UGV moves forward, and if the target is lower (nearer) in the image than desired, the UGV halts or moves backward). In this way, the UGV follows the target while maintaining a desired separation.
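  • A minimal sketch of such a thermal-following behavior is shown below: the thermal image is thresholded, isolated pixels are filtered out, and the centroid's horizontal and vertical offsets are converted into turn and drive commands. The threshold, gains, and set points are illustrative assumptions.

```python
# A sketch of the thermal-following behavior described above: threshold the
# thermal image, reject isolated pixels, take the centroid of what remains,
# and convert its horizontal/vertical offsets into turn and drive commands.
# The threshold, gains, and set points are illustrative assumptions.
import numpy as np
from scipy import ndimage

def follow_command(thermal_img, hot_thresh=200, min_blob_px=20,
                   desired_row_frac=0.4, turn_gain=0.8, drive_speed=0.3):
    hot = thermal_img > hot_thresh                      # pixels warm enough to be a person
    labels, n = ndimage.label(hot)                      # connected components
    sizes = ndimage.sum(hot, labels, range(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_blob_px))
    if not keep.any():
        return 0.0, 0.0                                 # no target: stop
    rows, cols = np.nonzero(keep)
    r_c, c_c = rows.mean(), cols.mean()                 # centroid of remaining hot pixels
    h, w = thermal_img.shape
    turn = turn_gain * (c_c / w - 0.5)                  # horizontal offset -> turn rate
    # Higher in the image (smaller row index) than the set point means the
    # target is farther: drive forward; otherwise halt.
    forward = drive_speed if r_c / h < desired_row_frac else 0.0
    return turn, forward
```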
  • Certain embodiments of the present teachings can additionally or alternatively utilize thermal imaging with a UAV.
  • the control architecture comprised the following three primary parts: (1) a fully integrated architecture fusing the U.S. Army's Armament Research, Development and Engineering Center multi-platform controller (ARDEC MPC) architecture, a Mission Planner with collaborative engagement capabilities, and local Decentralized Data Fusion nodes on the unmanned vehicles; (2) a populated Mission Planner with target engagement-specific Mission Task Components, associated agents and defined interface(s) to integrate with the MPC architecture; and (3) a functional architecture decomposition of specific Mission Task Components to clarify how high level tasks are executed at the low level by the respective unmanned platforms.
  • the present teachings contemplate many or all of the following functions being performed by the operator and/or members of the unmanned vehicle team.
  • the operator and members of the unmanned vehicle team are referred to as mission agents.
  • simultaneous control of at least one UAV and at least one UGV is achieved from a single operator control unit (OCU), which can be achieved using waypoint navigation for both the UAV and UGV.
  • the system can provide integration of waypoint control.
  • waypoint paths generated by the Supervisor OCU can be translated to appropriate UAV waypoint paths.
  • Software tools can be employed for task allocation to support coordinated search, pursuit, and tracking of a target with unmanned vehicles.
  • the overall system in accordance with an exemplary embodiment of the present teachings comprises an iRobot PackBot UGV with, for example, a Navigator payload and sensor suite.
  • the PackBot and its Navigator Payload sensor suite can operate using the Aware 2.0 robot control architecture.
  • the PackBot, as illustrated in FIG. 8 , is equipped with two main treads used for locomotion, and two articulated flippers having treads that are used to climb over obstacles.
  • a PackBot can typically travel at sustained speeds of up to 4.5 mph.
  • a PackBot's electronics are typically enclosed in a compact, hardened enclosure, and can comprise a 700 MHz mobile Pentium III with 256 MB SDRAM, a 300 MB compact flash memory storage device, and a 2.4 GHz 802.11b radio Ethernet.
  • the system can also comprise an AeroVironment Raven UAV that is back-packable and hand-launchable.
  • a Raven is illustrated in FIG. 9 .
  • the Raven typically has a 90-minute flight duration and features EO/IR payloads and GPS.
  • the Raven can be operated manually or programmed for autonomous operation using, for example, a laptop mission planner for processing and the Raven's advanced avionics and precise GPS navigation.
  • the Raven has a wingspan of 4.5 feet and can weigh just over 4 lbs. It can be assembled in less than 30 seconds and supports aerial surveillance up to 10 km in line-of-sight range. The Raven can travel at speeds of up to 50 knots. It can be equipped with forward-looking and side-looking camera ports.
  • FIG. 9 illustrates an architecture supporting an integrated system in accordance with various embodiments of the present teachings.
  • the OCU Supervisor includes a Mission Planner with a collaborative engagement (CE) node.
  • the Mission Planner CE node is the central node that manages the overall unmanned system and decomposes the CDAS/C2 high level mission commands to appropriate unmanned system agents.
  • the Mission Planner CE node functions are described in more detail hereinbelow.
  • CDAS (Combat Decision Aid Software) is a high-level mission planning, decision support tool providing simultaneous situational awareness (SA), data sharing, and mission analysis for multiple combat units.
  • CDAS provides libraries, functions, and capabilities that minimize redundant efforts and conflicting capabilities or efforts, and can assist in providing relevant, timely, and critical information to the operator.
  • a CDAS CoT component can be utilized to translate Aware 2.0 interface calls from the Supervisor OCU to CoT messages that are sent to CDAS, and to receive CoT messages from TCP and/or UDP and translate them to Aware 2.0 events/interface calls.
  • the Mission Planner conducts discrete management of tasks and assigns those tasks to the unmanned vehicles while the Decentralized Data Fusion (DDF) nodes manage, in a distributed fashion, low-level continuous execution of the tasks and coordinate shared data and discrete maneuvers. DDF function is described in detail hereinbelow.
  • the illustrated architecture allows for the Mission Planner to handle contingency operations as they arise and respond to them by updating tasks to the team agents while the DDF nodes support tight collaboration and coordinated maneuvers to pursue and geo-locate the target.
  • the Mission Planner CE node can be separate from the OCUs from a functional and interface perspective.
  • the software modules can be designed to be plug and play. Therefore, the Mission Planner module can have interfaces allowing it to be located in the OCU Supervisor or separated onto another piece of hardware.
  • the Mission Planner node and the OCUs for both UAV(s) and UGV(s) are envisioned to be located in the same hardware unit, referred to herein as the “OCU Supervisor.”
  • the architecture design can allow a single operator to monitor and control the mission through the OCU Supervisor.
  • the collaborative software system can be quickly responsive to mission changes and replanning, while also reducing the complexity in the number of components and their respective interfaces. This is facilitated by the UAV and UGV systems supporting waypoint navigation.
  • the OCU Supervisor can display both video and telemetry data of each unmanned vehicle to the operator. It can also allow the operator to manually control each unmanned vehicle.
  • while the OCU Supervisor includes the hardware typically used to manually operate the UGV, a separate hand controller can be utilized for manual control of the UAV.
  • the exemplary architecture illustrated in FIG. 9 includes two UGVs, one UAV, one UGV OCU and one UAV OCU. The number of unmanned vehicles and OCUs may vary in accordance with the present teachings.
  • Tactical UAVs are typically designed for optimal endurance and hence minimized for weight. As a result, computing on the UAV platform is typically minimal. Most of the required collaborative DDF processing and coordinated navigation software will therefore be located on the UAV OCU, rather than on the UAV platform itself. On the other hand, tactical UGVs are typically not as constrained for weight and endurance and have significantly higher on-board processing capacity. In such a case, most or all of the required collaborative DDF processing can occur on the UGV platform.
  • the exemplary architecture illustrated in FIG. 9 supports not only individual and coordinated control of the UAV and UGVs, but it also supports the UAV to act as a data relay.
  • Joint Architecture for Unmanned Systems (JAUS) messages sent to the UAV can be passed through to the UGV for processing.
  • a UAV data relay can significantly extend the control range of the UGV by at least an order of magnitude.
  • the Mission Planner specifies the high-level mission to be executed, and the architecture in FIG. 10 illustrates the functional blocks that can be utilized to plan mission execution.
  • the definition of the illustrated architecture is based on a defined mission planner framework that can be a modified version of the Overseer SBIR project, which can provide localization, path planning, waypoint navigation, and obstacle avoidance for at least unmanned ground vehicles.
  • This architecture can manage resources of the mission to optimize execution from a centralized planner at the system level.
  • Mission Task Components are tasks over which the Mission Planner has purview and are assigned to agents through a decision step in the Task Allocation Module.
  • the illustrated mission thread contains four agents: an operator; a UAV; and two UGVs.
  • the capabilities and status of the operator and unmanned vehicles are recorded and continually updated in the Agent Capabilities Database.
  • the Agent Capabilities Database stores such information and can provide appropriate weighting to an agent's ability to perform a given task, which will impact the Task Allocation decision.
  • MTC tasks are intended to manage the highest level of tasks for executing the collaborative engagement mission. These high-level tasks can be executed by individual agents or a combination of agents. In all cases, specific software modules will support each high-level MTC. As illustrated in FIG. 10 , the primary MTCs to conduct a collaborative target engagement can be: Search Area, Pursue Target, Geolocate Target, Manage Agent Resources, and Manage Communications.
  • the Task Allocation Module manages the execution of the collaborative engagement mission and assigns MTCs to appropriate agents given their capabilities.
  • the Task Allocation Module can also allocate a sequence of multiple MTC tasks, as long as the assigned agent's capabilities support those tasks.
  • the DDF algorithms, which can include a state machine on each agent, can support sequential execution of tasks with gating criteria to execute subsequent tasks.
  • the Task Allocation Module can provide data to the MPC SA server, which can then provide information to the ARDEC architecture nodes as described above. This allows feedback to the ARDEC system for monitoring, situational awareness, and display.
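  • The toy sketch below illustrates, under assumed capability names and weights, how an Agent Capabilities Database could drive capability-weighted assignment of MTCs to agents in the spirit of the Task Allocation Module; it is not the patent's database schema or allocation algorithm.

```python
# A toy sketch of capability-weighted task allocation: each agent's recorded
# capabilities are scored against a Mission Task Component (MTC) and the
# best-suited available agent is assigned. Capability names and weights are
# illustrative assumptions.
AGENT_CAPABILITIES = {
    "UAV-1": {"aerial_search": 0.9, "geolocate": 0.7, "ground_pursuit": 0.0},
    "UGV-1": {"aerial_search": 0.0, "geolocate": 0.6, "ground_pursuit": 0.8},
    "UGV-2": {"aerial_search": 0.0, "geolocate": 0.5, "ground_pursuit": 0.9},
}

MTC_REQUIREMENTS = {                  # relative weight of each capability per MTC
    "Search Area":      {"aerial_search": 0.7, "ground_pursuit": 0.3},
    "Pursue Target":    {"ground_pursuit": 0.8, "geolocate": 0.2},
    "Geolocate Target": {"geolocate": 1.0},
}

def allocate(mtc, busy=()):
    weights = MTC_REQUIREMENTS[mtc]
    def score(agent):
        caps = AGENT_CAPABILITIES[agent]
        return sum(w * caps.get(cap, 0.0) for cap, w in weights.items())
    candidates = [a for a in AGENT_CAPABILITIES if a not in busy]
    return max(candidates, key=score)

print(allocate("Pursue Target"))                   # best pursuit-capable agent
print(allocate("Pursue Target", busy={"UGV-2"}))   # next-best if UGV-2 is tasked
```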
  • while the Mission Planner architecture provides a high-level view of the management of the overall collaborative engagement mission, functional state diagrams and a description of each MTC are provided below regarding software module design.
  • the set of functions to accomplish a mission encompasses the execution of simultaneous tasks as well as sequential tasks. While some tasks are executed independently, other tasks require collaboration with other unmanned vehicle agents. Tasks requiring collaboration among the unmanned vehicle agents are highlighted.
  • the Manage Agent Resources MTC and the Manage Communications MTC have common aspects relevant to the management of the overall system, independent of the specific mission to be executed.
  • the functional architecture is primarily defined by the Mission Planner.
  • the remaining three MTCs are specific for performing a target engagement mission and can therefore be more complex.
  • the illustrated functional flow block architectures for these tasks define required functions among the unmanned vehicles and supervisory operator.
  • a Search Area MTC embodiment illustrated in FIG. 11 begins with selection of an area of interest from the supervisory operator.
  • either one type of unmanned vehicle or both types of unmanned vehicles can be assigned by the Mission Planner to search the area.
  • the upper block specifies the Search Area MTC functions to be performed by a UAV, and the lower block specifies the Search Area MTC functions to be performed by a UGV.
  • One exception is that the UGV will more often encounter unanticipated obstacles.
  • live conditions may include additional obstacles such as road traffic, crowds, rubble piles, etc., which the UGV will have to circumnavigate.
  • the circumnavigation function is represented by a non-solid line.
  • the follow-on task for circumnavigation is a collaborative task, Collaborate Path. This task has a bolded border to indicate that it has a separate functional block architecture, described below, which involves other agents aiding the UGV to navigate and re-route its path.
  • the supervisory operator will monitor the unmanned agents' actions as they maneuver through their search patterns. The supervisor can, at any time, input waypoints to update the search pattern for any of the unmanned vehicles.
  • a Pursue Target MTC embodiment is illustrated in FIG. 12 and has a layout that is similar to the Search Area MTC. Initially, either (1) the target location is known by intelligence and the operator manually provides the target coordinates, or (2) the target is detected by the operator when viewing available image data and the operator selects the target to pursue.
  • for each assigned unmanned vehicle, UAV functions are depicted in the upper box and UGV functions are depicted in the lower box.
  • the fused DDF track can be generated by all available sensor measurements and intelligence on the target.
  • a Geolocate Target MTC embodiment is illustrated in FIG. 13 and has the highest number of tasks requiring collaboration and, therefore, the highest number of DDF software modules.
  • the task of target selection is executed by the supervisory operator, denoted by the human icon.
  • Target detection can occur in a different MTC, such as Pursue Target, but this function is addressed here for completeness in the event that other MTCs were not executed beforehand.
  • the Mission Planner can assign available unmanned vehicles to geolocate a target if the target of interest is in the unmanned vehicle's camera view. If the Mission Planner designates a UAV to execute this MTC, then the sequence of tasks in the upper box is followed. The UGV sequence of tasks for geolocating a target is set forth in the lower box. Once the target of interest is specified in an image, the UGV can maintain track on the image in the 2D camera coordinate frame using, for example, Hough transforms, hysteresis, and time-averaged correlation.
  • the UGV comes to a stop to eliminate noise before its on-board laser ranger or other functionality is able to accurately measure the range to the target.
  • This range measurement is correlated with angle measurements from the image to estimate the target's position.
  • a transformation to geocoordinates is calculated, and the target's track state can be either initialized or updated with this estimate.
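  • The sketch below illustrates the range-plus-bearing geolocation step just described: the UGV's geodetic position and heading, the camera pan angle, and a laser range are combined to estimate target latitude and longitude. The flat-earth (local tangent plane) approximation and the example values are assumptions reasonable only at short range.

```python
# A sketch of range-plus-bearing geolocation: the UGV's own geodetic position
# and heading, the camera pan angle, and a laser range are combined to estimate
# the target's latitude/longitude. A flat-earth (local tangent plane)
# approximation is an assumption adequate only at short range.
import math

EARTH_R = 6371000.0  # mean Earth radius, meters

def geolocate(ugv_lat, ugv_lon, heading_deg, pan_deg, range_m):
    # Absolute bearing to the target = vehicle heading + camera pan offset.
    bearing = math.radians(heading_deg + pan_deg)
    north = range_m * math.cos(bearing)       # meters north of the UGV
    east = range_m * math.sin(bearing)        # meters east of the UGV
    dlat = math.degrees(north / EARTH_R)
    dlon = math.degrees(east / (EARTH_R * math.cos(math.radians(ugv_lat))))
    return ugv_lat + dlat, ugv_lon + dlon

# Example: target 42 m away, 30 degrees right of the vehicle heading,
# while the UGV faces due north.
print(geolocate(39.7392, -104.9903, heading_deg=0.0, pan_deg=30.0, range_m=42.0))
```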
  • the UGV can then transmit information to the other DDF nodes, including to the operator's Supervisor OCU for displaying the target's updated track state.
  • a fusion step can occur across all DDF nodes and the updated and integrated DDF fused track state can update the UGV's local track.
  • the UGV can then rely on this updated fused track for directing the camera's pointing angle, for example via a pan/tilt mechanism, to ensure camera coverage of the target. If necessary, the UGV can navigate and pursue the target to maintain target ranging and observations. If the UGV, while in pursuit of the target, arrives at an obstacle that its obstacle detection/obstacle avoidance (ODOA) algorithm is unable to circumnavigate, the UGV can initiate the Collaborate Path DDF task to elicit aid from neighboring unmanned vehicle agents.
  • the Collaborate Path task can be instantiated when a UGV automatically determines that it cannot execute a planned path due to an unanticipated blockage that it cannot circumnavigate.
  • the UGV transmits a coded message to other DDF node agents seeking assistance.
  • the other DDF nodes are able to determine which agent is best positioned to provide aid.
  • This assisting agent can either be a UAV or UGV, which maneuvers toward the agent needing assistance. Assistance can include, for example, providing additional information regarding the size and location of the blockage, as well as alternative navigation routes.
  • the present teachings contemplate a variety of techniques for detecting obstacles in the UGV's path.
  • imagery can be used by the operator to select obstacles that the blocked UGV should take into account.
  • more sophisticated sensors and obstacle discrimination algorithms can automatically detect and recognize obstacles and provide blockage information, including geo-coordinates of the blockage, the type of blockage, the size of the blockage, etc.
  • operator-selected obstacles from the image data can be converted to geo-coordinates.
  • the geo-coordinates allow the obstructed UGV to recalculate its path plan. If the UGV is unable to reach a viable path plan solution, it can transmit a correction message to an assisting agent which can then continue maneuvers to provide additional blockage information. If the obstructed UGV is able to navigate with the revised path plan, it can transmit a message to the assisting agent indicating that it has successfully determined a revised route or cleared the obstruction.
  • while the system architecture embodiment described herein provides the Mission Planner CE node at a high level, at the local nodes the unmanned vehicle agents may take on low-level tasks in a decentralized fashion.
  • the DDF nodes support autonomous collaboration for targeting, and can provide strong target localization performance while keeping processing and bandwidth utilization at easily manageable levels.
  • a decentralized data fusion network consists of a network of sensing nodes, each with its own processing facility, which together do not require any central fusion or central communication facility.
  • the sensing nodes are all components containing DDF nodes, which include the OCUs and the unmanned vehicle platforms. In such a network, fusion occurs locally at each node on the basis of local observations and the information communicated from neighboring nodes.
  • a decentralized data fusion network is characterized by three constraints: there is no single central fusion node; there is no common communication facility; and no node has global knowledge of the network topology.
  • the constraints imposed provide a number of important characteristics for decentralized data fusion systems. Eliminating a central node and any common communication facility ensures that the system is scalable, as there are no limits imposed by centralized computational bottlenecks or lack of communication bandwidth. Ensuring that no node is central and that no global knowledge of the network topology is required for fusion allows the system to survive the loss or addition of sensing nodes.
  • the constraints also make the system highly resilient to dynamic changes in network structure. Because all fusion processes must take place locally at each sensor site through a common interface and no global knowledge of the network is required, nodes can be constructed and programmed in a modular, reconfigurable fashion. Decentralized networks are typically characterized as being modular, scalable, and survivable.
  • the DDF fusion architecture implements decentralized Bayesian estimation to fuse information between DDF nodes.
  • Decentralized estimation schemes are derived by reformulating conventional estimators such as Kalman filters in Information or log-likelihood form.
  • the fusion operation reduces to summation of its information sources.
  • this summation can be performed in an efficient decentralized manner by passing inter-node state information differences. This concept is shown in FIG. 15 , which illustrates network fusion by propagating inter-node differences.
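  • The sketch below illustrates information-form fusion of the kind described above: each node keeps an information matrix and vector, local observations add their information contributions, and fusing a neighbor's update reduces to adding the difference it transmits. The dimensions, models, and values are illustrative assumptions.

```python
# A sketch of information-form fusion: each node keeps its estimate as an
# information matrix Y = P^-1 and information vector y = P^-1 x, and fusion
# with a neighbor reduces to adding the neighbor's *new* information (its
# difference since the last exchange). Dimensions and values are assumptions.
import numpy as np

class InformationEstimate:
    def __init__(self, dim):
        self.Y = np.zeros((dim, dim))   # information matrix (inverse covariance)
        self.y = np.zeros(dim)          # information vector

    def add_observation(self, H, R, z):
        # Local sensor update: information contributions simply add.
        Rinv = np.linalg.inv(R)
        self.Y += H.T @ Rinv @ H
        self.y += H.T @ Rinv @ z

    def fuse_difference(self, dY, dy):
        # Incorporate a neighbor's inter-node difference (its new information).
        self.Y += dY
        self.y += dy

    def state(self):
        return np.linalg.solve(self.Y, self.y), np.linalg.inv(self.Y)

# Two nodes observe the same 2D target position with different noise.
a, b = InformationEstimate(2), InformationEstimate(2)
H = np.eye(2)
a.add_observation(H, R=np.diag([4.0, 4.0]), z=np.array([10.2, 5.1]))
b.add_observation(H, R=np.diag([1.0, 1.0]), z=np.array([9.8, 4.9]))

# Node b sends only the change in its information since the last exchange.
a.fuse_difference(b.Y.copy(), b.y.copy())
x_fused, P_fused = a.state()
```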
  • the functional blocks required to implement this fusion process consist of sensor pre-processing, local state estimation, and inter-node DDF communication management.
  • an additional control block is appropriate to direct sensing resources.
  • Each of the blocks illustrated in FIG. 16 is implemented as one or more software components that can communicate through standard network and inter-process protocols.
  • the result is a highly flexible and reconfigurable system architecture.
  • Component modules can be located and connected in a customizable manner that delivers the most appropriate system configuration. Examples include small expendable UAVs with limited computing power.
  • a DDF structure can connect processed sensor and actuation signals wirelessly to a remote processor for processing, estimation, and control.
  • the DDF network integrates multiple estimates from multiple vehicles in a way that is simple, efficient, and decentralized.
  • a decentralized fusion node for an unmanned vehicle agent is illustrated in FIG. 18 .
  • Each node maintains a local estimate for the state of the target vehicle, which can include the target vehicle's position, its velocity, and other identifying information.
  • a DDF Communication Manager can follow a simple rule: at every time step, each node communicates both updates to the local estimate state as well as uncertainty to its neighbors. These changes propagate through the network to inform all nodes on the connected network using peer-to-peer communication.
  • upon establishing a connection, each node performs an additional operation to determine estimate information shared in common with the new neighbor node. Exchanged estimates are then aggregated into the local node's estimate without double counting.
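  • Continuing the information-form sketch above, the following illustrates per-link bookkeeping that avoids double counting: each channel remembers the information already held in common with a neighbor, transmits only the difference, and adds incoming differences to both the local estimate and the common record. This is an illustrative construction, not the patent's specific protocol.

```python
# Per-neighbor channel bookkeeping for decentralized fusion: track the
# information already shared with each neighbor so that only new information
# is transmitted and nothing is counted twice. An illustrative construction.
import numpy as np

class Channel:
    def __init__(self, dim):
        self.Y_common = np.zeros((dim, dim))
        self.y_common = np.zeros(dim)

    def outgoing(self, Y_local, y_local):
        # Send only what the neighbor does not yet have, then record it as shared.
        dY, dy = Y_local - self.Y_common, y_local - self.y_common
        self.Y_common, self.y_common = Y_local.copy(), y_local.copy()
        return dY, dy

    def incoming(self, dY, dy, Y_local, y_local):
        # The neighbor now shares this information too; fuse it locally.
        self.Y_common += dY
        self.y_common += dy
        return Y_local + dY, y_local + dy
```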
  • the operator utilizes the Supervisor OCU to manually detect one or more targets in received video data.
  • the operator is relied on for target detection due to the large variety of adversary types that might need to be detected, as well as the broad range of backgrounds from which targets need to be identified.
  • the low-level task of tracking the target can be automated with software.
  • Target tracking in EO and IR imagery, from one or more UAVs and/or one or more UGVs, can utilize an algorithm that maintains an adaptive classifier separating the target from its background.
  • the classifier decides which pixels belong to the target and which pixels belong to the background, and is updated iteratively using a window around the target's current location. If the system is in danger of losing the target, either due to a potential for occlusion by known buildings or because the target becomes harder to distinguish from the background or other targets, the system can alert the operator that assistance is required. The goal is to minimize the amount of operator assistance necessary.
  • Approximate geolocation from UGVs can be estimated from heading and position information, as well as estimated pointing information from Pan-Tilt-Zoom cameras. Due to a dependence on the attitude of the vehicle, geolocation from UAV video can be more difficult without certain inertial systems or gimbaled cameras.
  • geolocation for UAVs can be implemented by matching frames from UAV video to previously acquired aerial imagery, such as from recent satellite imagery. For a given area, a library of feature descriptors (e.g., large visible landmarks) is constructed. For each received image, feature detection is performed, the library is queried, and a location on the ground best matching the query image is chosen.
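  • The sketch below illustrates the frame-to-reference matching idea using ORB features and brute-force descriptor matching from OpenCV: a library of georeferenced image chips is indexed by descriptors, and the chip that best matches a UAV frame supplies the location estimate. The chip format and scoring rule are illustrative assumptions.

```python
# A sketch of matching UAV frames against previously acquired aerial imagery.
# The reference "library" here is a list of georeferenced image chips; the chip
# whose ORB descriptors best match the UAV frame wins. Chip format and scoring
# are illustrative assumptions.
import cv2

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def build_library(chips):
    # chips: list of (geo_coordinate, grayscale image) pairs from prior imagery.
    library = []
    for geo, img in chips:
        kp, desc = orb.detectAndCompute(img, None)
        if desc is not None:
            library.append((geo, desc))
    return library

def geolocate_frame(frame_gray, library, max_match_dist=50):
    _, desc = orb.detectAndCompute(frame_gray, None)
    if desc is None:
        return None
    best_geo, best_score = None, 0
    for geo, ref_desc in library:
        matches = matcher.match(desc, ref_desc)
        score = sum(1 for m in matches if m.distance < max_match_dist)
        if score > best_score:
            best_geo, best_score = geo, score
    return best_geo    # geo-coordinate of the best-matching reference chip
```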
  • Failure detection, image stabilization, and improvements to operator target track initialization can improve target tracking performance for the unmanned vehicle agents. If target tracking is initialized by the operator with an over-sized image region, the tracker may confuse target characteristics with image background characteristics, leading to track loss. An under-sized image region may cause the tracker to reject or fail to incorporate certain target characteristics, which could result in tracking failure. Proper sizing of the tracker initialization region can be achieved in a variety of ways, including by operator training. In certain embodiments, and particularly for UAV tracking, utilizing both motion-based tracking and color-based tracking can improve overall tracking success for the system.
  • a DDF Estimation System uses measurements from ground and aerial agents to localize the target and then disseminates the target location information to be acted upon by the system's Collaborative Path planning systems.
  • the operator begins by designating where to look for targets, for example by drawing on a map displayed on the Supervisor OCU.
  • the unmanned vehicle agents can then converge on the area, and the operator may, for example, choose to detect a target on the UAV video.
  • the UAV DDF node's Automatic Target Tracking could then take over and track the target's position in the video.
  • the Mission Planner can then initiate pursuit by unmanned vehicle agents (e.g., one or more UGVs) using the estimated position. Once in pursuit or when the target is in view, the one or more UGVs can provide their own estimates of the target's position. When these estimates become available, an ad-hoc network can be formed among the nodes, and DDF can take over aggregating the estimates into a single minimum variance estimate.
  • the Supervisor OCU interface facilitates the operator's management, command and control, and monitoring of mission execution.
  • the Supervisor OCU display interface provides the operator with an intuitive understanding of mission status and expected execution of future agent actions.
  • Certain mixed initiative approaches such as dynamically accepting different levels and frequencies of intervention, self-recognition of needing assistance, and sharing of decision-making at specific levels, can assist the operator in managing a multi-unmanned vehicle mission.
  • the Supervisor OCU interface can facilitate operator waypoint input for the unmanned vehicles to redirect their routes, selecting a specific unmanned ground vehicle to teleoperate, “freezing” UGVs, and putting UAVs in a holding pattern.
  • the illustrated interface allows the use of drag strokes to control aspects of the unmanned vehicles.
  • the mapping of click-drag strokes in specific areas of the display interface can facilitate controls of different vehicles, injection of waypoints, camera controls, and head-neck controls.
  • the icons below the map view in the upper left allow the operator to inject waypoints simply by selecting a vehicle and then placing waypoints directly onto the map.
  • the Supervisor OCU interface facilitates operator injection of high-level mission goals through interaction with the Mission Planner CE in the upper left section of the display.
  • This interface can allow the operator to draw a polygon on a street map designating the area to be searched.
  • This interface can also allow the operator to cue targets in the video streams emanating from the unmanned vehicle agents. Once the target has been specified, the vehicles will track the target autonomously or semi-autonomously.
  • the interface can also integrate directives from the operator that keep the vehicle from going into certain areas. For example, if the operator sees an area that is blocked, the area can be marked as a NO-GO region by, for example, drawing on the map. Path planning can then automatically reroute any plans that might have required navigation through those areas.
  • icons representing available unmanned vehicle agents can be utilized in the map (upper left corner of display) to indicate the appropriate location of the represented unmanned vehicle agent on the map.
  • updates and track history can be properly registered to each unmanned vehicle agent.
  • one or more UAVs can be directed by the system to orbit the building containing the target and determine if and when the target exits the building. Additional UGVs may be patrolling the perimeter of the building on the ground. If and when the target exits the building, an orbiting UAV that discovers the exit can inform other agents of the exit. The UGV that followed the target into the building can then exit the building, attempt to obtain line-of-sight to the target, and again follow the target. While this is occurring, other unmanned vehicle team members collaborate to maintain line-of-sight with the exited target. Alternatively, another UGV could obtain line-of-sight to the target and begin following the target, in which case the system may or may not instruct the original UGV to also find and follow the target, depending on mission parameters and/or operator decision making.

Abstract

A method for controlling unmanned vehicles to maintain line-of-sight between a predetermined target and at least one unmanned vehicle. The method comprises: providing an unmanned air vehicle including sensors configured to locate a target and an unmanned ground vehicle including sensors configured to locate and track the target; communicating and exchanging data to and among the unmanned ground vehicles; controlling the unmanned air vehicle and the unmanned ground vehicle to maintain line-of-sight between a predetermined target and at least one of the unmanned air vehicles; geolocating the predetermined target with the unmanned air vehicle using information regarding a position of the unmanned air vehicle and information regarding a position of the target relative to the unmanned air vehicle; and transmitting information defining the geolocation of the predetermined target to the unmanned ground vehicle so that the unmanned ground vehicle can perform path planning based on the geolocation.

Description

  • This is a continuation of U.S. patent application Ser. No. 12/405,207, filed Mar. 16, 2009, titled Collaborative Engagement for Target Identification and Tracking, which claimed priority to U.S. Provisional Patent Application No. 61/036,988, filed Mar. 16, 2008, the entire disclosure of which is incorporated herein by reference in its entirety.
  • INTRODUCTION
  • The present teachings relate to collaborative engagement of unmanned vehicles to identify, detect, and track a target. The present teachings relate, more particularly, to collaboratively utilizing unmanned air and ground vehicles to identify, detect, and track a target in a variety of urban and non-urban environments.
  • BACKGROUND
  • There exists a need to search for, detect, track, and identify human and non-human targets, particularly in urban settings where targets can use their setting, e.g., buildings, narrow alleyways, and/or blending with civilians, to escape or decrease chances of being discovered. In an urban environment, it may not be enough to command an unmanned air vehicle (UAV) to fly over a target and assume that the target will be seen. It may be necessary for the UAV, in an urban environment, to fly at low altitudes and parallel to an alleyway rather than perpendicular to it; or to make an orbit that avoids a tall building. A large risk during urban surveillance is losing a target due to occlusion by buildings. Even with this increased intelligence, the UAV may still be unable to adequately search for, detect, track, and identify a target.
  • SUMMARY
  • The present teachings provide a method for controlling unmanned vehicles to maintain line-of-sight between a predetermined target and at least one unmanned vehicle. The method comprises: providing an unmanned air vehicle including sensors configured to locate a target and an unmanned ground vehicle including sensors configured to locate and track the target; communicating and exchanging data to and among the unmanned ground vehicles; controlling the unmanned air vehicle and the unmanned ground vehicle to maintain line-of-sight between a predetermined target and at least one of the unmanned air vehicles; geolocating the predetermined target with the unmanned air vehicle using information regarding a position of the unmanned air vehicle and information regarding a position of the target relative to the unmanned air vehicle; and transmitting information defining the geolocation of the predetermined target to the unmanned ground vehicle so that the unmanned ground vehicle can perform path planning based on the geolocation.
  • The present teachings also provide a collaborative engagement system comprising: at least one unmanned air vehicle including sensors configured to locate a target and at least one unmanned ground vehicle including sensors configured to locate and track a target; and a controller facilitating control of, and communication and exchange of data to and among the unmanned vehicles, the controller facilitating data exchanged via a common protocol. The collaborative engagement system controls the unmanned vehicles to maintain line-of-sight between a predetermined target and at least one of the unmanned vehicles, geolocating the predetermined target with the unmanned air vehicle and transmitting information defining the geolocation of the predetermined target to the unmanned ground vehicle so that the unmanned ground vehicle can perform path planning based on the geolocation.
  • Additional objects and advantages of the present teachings will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the present teachings. Such objects and advantages may be realized and attained by means of the elements and combinations particularly pointed out in the appended claims.
  • It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the present teachings or claims.
  • The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments and, together with the description, serve to explain certain principles of the present teachings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 illustrates an exemplary visibility map.
  • FIG. 2 illustrates exemplary visibility codes from a variety of directions for the location illustrated in the visibility map of FIG. 1.
  • FIG. 3 illustrates an exemplary visibility likelihood map for a uniform distribution of targets, cumulative of the directional visibility illustrated in FIG. 2.
  • FIG. 4 illustrates an exemplary visibility likelihood map for a non-uniform distribution of targets.
  • FIG. 5 illustrates an exemplary visibility likelihood map generated when a non-uniform distribution of possible target positions is known.
  • FIG. 6 illustrates an exemplary grey-scale visibility map showing a likelihood that a target (dot at left side) can be viewed from any direction.
  • FIG. 7 illustrates an exemplary navigation cost map based on the values of FIG. 6.
  • FIG. 8 illustrates an exemplary UGV for use in a system in accordance with the present teachings.
  • FIG. 9 illustrates an exemplary UAV for use in a system in accordance with the present teachings.
  • FIG. 10 illustrates exemplary functional blocks that can be utilized to plan mission execution.
  • FIG. 11 illustrates an exemplary Search Area Mission Task Component.
  • FIG. 12 illustrates an exemplary Pursue Target Mission Task Component.
  • FIG. 13 illustrates an exemplary Geolocate Target Mission Task Component.
  • FIG. 14 illustrates an exemplary Collaborate Path task.
  • FIG. 15 illustrates an exemplary network fusion by propagating inter-node differences.
  • FIG. 16 illustrates functional blocks required to implement fusion.
  • FIG. 17 illustrates an exemplary embodiment of an overall system for collaborative unmanned vehicle target detection and tracking.
  • FIG. 18 illustrates an exemplary embodiment of a decentralized fusion node for an unmanned vehicle agent.
  • FIG. 19 illustrates an exemplary embodiment of a Supervisor OCU interface.
  • DESCRIPTION OF THE EMBODIMENTS
  • Reference will now be made in detail to exemplary embodiments of the present teachings, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.
  • In accordance with the present teachings, more than one unmanned vehicle (including one or more UAVs and/or UGVs) is utilized, collaboratively, to search for, detect, track, and identify a target. The unmanned vehicles collaborate to best ensure that at least one unmanned vehicle covers the target while the sights of the other vehicle(s) are blocked by, for example, an urban obstruction such as a building. The present teachings contemplate giving unmanned vehicles the intelligence to decide which positions will maximize potential sight lines, to predict (in certain embodiments of the present teachings with operator assistance and guidance) where a target will go, and to allow teams of vehicles to collaborate in achieving full coverage of a target. An exemplary embodiment of an overall system for collaborative unmanned vehicle target detection and tracking is illustrated in FIG. 17. As shown, a Supervisor operator control unit (OCU) communicates with and controls at least one UGV and at least one UAV via, for example, radio frequency (RF) and/or Ethernet. In the illustrated embodiment, a ground control station for the UAV is used in addition to the Supervisor OCU for UAV communication and control. This exemplary hardware architecture integrates at least one UGV and at least one UAV with a single common controller, the Supervisor OCU. Communication to and among the unmanned vehicles enables the desired collaboration. As an example, for collaborative target tracking, the required information to be shared includes the target's state estimate and its covariance matrix, which captures the uncertainty of the target's state. As this data is shared through the network, the target data gets fused as described below regarding DDF architecture. Collaboration occurs when the fused target estimate is updated among the unmanned vehicles. The Supervisor OCU can provide the fused track estimate to a system operator.
  • In embodiments employing a Raven as the UAV, the Raven communication and control hardware, commonly referred to as its Ground Control Station (GCS), can comprise a hand controller, a hub unit, an RF unit, and an antenna(e) post. The GCS hub unit can process and convert the message, telemetry, and hand controls to Cursor-on-Target (CoT) messages to be received by the UAV platform. The GCS hub and the illustrated FreeWave radio can interface with the Supervisor OCU via an Ethernet hub for computationally intensive tasks.
  • The present teachings contemplate developing a system allowing a team of unmanned vehicles to search urban terrain for an elusive human dismount target or non-human target, track the target even if it attempts to avoid detection, and pursue and engage the target on command from an operator.
  • In certain embodiments as described hereinbelow in more detail, the present teachings are implemented on a PackBot as the UGV and an AeroVironment Raven or AirRobot quad-rotor platform as a UAV. However, one skilled in the art will appreciate that a variety of known UGVs and UAVs may be utilized collaboratively in accordance with the present teachings.
  • Certain embodiments of the present teachings contemplate integrating existing or developing visual tracking algorithms (such as, for example, those being developed by the Air Force Research Laboratory (AFRL)) with existing situational awareness frameworks (such as, for example, the AFRL Layered Sensing model), which can be augmented by human assistance from an operator (using, for example, an operator control unit such as that provided for an iRobot PackBot) in the area of, for example, identifying the most likely targets. In accordance with certain embodiments, identified targets can be provided to the unmanned vehicle teams in terms of global positioning system (GPS) coordinates.
  • The present teachings further contemplate utilizing, for example, an a priori digital terrain elevation data (DTED) map of the urban terrain, from which target paths can be predicted (in some embodiments with operator assistance), and motion of the unmanned vehicles can be planned to maximize probability of keeping a target in view despite the presence of occluding obstacles. Certain embodiments of the present teachings provide such tracking and predicting a location of a target in the presence of occlusions (such as those that exist in urban environments) using certain predefined algorithms, and integration of those algorithms with semi-autonomous or autonomous behaviors such as navigation and obstacle avoidance behaviors suitable for real-world urban terrain.
  • In certain embodiments, the present teachings provide a UGV that is equipped with an orientation sensor such as a GPS or INS/GPS system (such as, for example, an Athena Micro Guidestar INS/GPS or a MicroStrain 3DM-GX1 orientation sensor) for navigation based on both GPS and INS, including navigation in occluded spaces such as urban canyons that may intermittently block GPS signals. The UGV can be equipped with a payload such as a Navigator Payload (which can include, for example, a stereo vision system, GPS, LIDAR (e.g., SICK LIDAR) integrated with GPS, an IMU, a gyro, a radio and a dedicated processor (for example running iRobot's proprietary Aware 2.0 software architecture)). The Navigator payload can provide, for example, on-board integrated obstacle avoidance and waypoint following behaviors through complex terrain. The UGV can additionally be equipped with a camera (e.g., a Sony zoom camera) on a pan/tilt (e.g., a TRACLabs Biclops pan/tilt) mount to keep a target in view from the ground.
  • The present teachings provide a UAV and UGV team that can track and potentially engage a human or non-human target. In certain embodiments, a single operator can control one or more unmanned vehicles to perform the operations necessary to search for, track, monitor, and/or destroy selected targets. This functionality can be implemented in accordance with the present teachings by utilizing a Layered Sensing shared situational awareness system that can determine the location of targets using combined machine perception and human feedback. The Layered Sensing system has been defined (by AFRL) as providing “military and homeland security decision makers at all levels with timely, actionable, trusted, and relevant information necessary for situational awareness to ensure their decisions achieve the desired military/humanitarian effects. Layered Sensing is characterized by the appropriate sensor or combination of sensors/platforms, infrastructure and exploitation capabilities to generate that situational awareness and directly support delivery of “tailored effects.” In accordance with various embodiments, the Layered Sensing system can direct an unmanned vehicle team to investigate a target and determine an optimal path to fly to view the target. It can also return views of the target from the air and the ground for operator (and other personnel) review. In conjunction with an a priori map and based on terrain data such as DTED terrain data, it can predict the target's location or assist an operator in predicting the target's location and, based on such prediction, determine an optimal path to fly to view the target.
  • In certain embodiments, if one of the unmanned vehicle team members flies to the predicted target location and cannot view the target, one or more of the unmanned vehicles in the team can utilize predictive algorithms in accordance with the present teachings to fly a search pattern to attempt to find the target. If the target is spotted by a team member, that team member, using its own GPS coordinates to determine GPS coordinates of the target, can send the target location to other team members. The UAV has mounted thereon one or more cameras that can, for example, be mounted in gimbals (e.g., a Cloud Cap Technology TASE gimbal) for optimal range of motion. If more than one camera is used, one camera can face forward and one camera can face to the side to keep the target in view. The cameras allow the UAV to keep the target in view. Another team member, such as an unmanned ground vehicle (UGV), can then navigate autonomously (or semi-autonomously with operator assistance) to the target location using, for example, GPS, INS, compass, and odometry for localization and LIDAR for obstacle avoidance. The LIDAR obstacle sensing can be integrated with terrain data from maps or from another source such as a team member. A path planning algorithm such as A* or a Rapidly-exploring Random Tree (RRT) can be utilized to plan a path to the target based on an a priori map. An RRT is a data structure and algorithm, widely used in robot path planning, designed for efficiently searching non-convex, high-dimensional search spaces. Simply put, the tree is constructed in such a way that any sample in the space is added by connecting it to the closest sample already in the tree.
  • When a team member arrives in proximity to the target, the team member can use its camera to attain a close-up view of the target. Then, as the target moves, the unmanned vehicle team is controlled to best maintain a view of the target despite occluding obstacles, using a combination of the target prediction algorithms and local navigation behaviors such as obstacle avoidance.
  • Path Planning to Search for Target
  • When searching for a target, UAV team members that comprise fixed wing aircraft (such as, for example, an AeroVironment Raven or Dragon Eye (with autopilot as necessary)) cannot remain stationary and must orbit, and therefore should be capable of planning for occlusions and minimizing them. In accordance with certain embodiments of the present teachings occlusion planning and minimization can be accomplished as follows:
  • First, the system attempts to evaluate or predict where the target is likely to be within a short time horizon (e.g., one to two minutes) by computing a distribution p_t(x) that gives a probability that the target is at x at time t. This can be accomplished, for example, by sampling from past observations of target tracks, a goal-oriented walking or running model for a target, and/or a model selection algorithm that chooses the best among these and other known models. The distribution can be represented and updated efficiently using particle filters, which are an extension of Kalman-type filters to multi-modal distributions.
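  • As a rough illustration of the particle-filter representation described above, the following Python sketch propagates a particle set with a simple motion model and resamples it against an observation. The random-walk motion model, the noise parameters, and the function names are illustrative assumptions rather than details taken from the present teachings.

```python
import numpy as np

# Minimal particle-filter sketch for the target-position distribution p_t(x).
def predict(particles, dt, speed_sigma=1.5):
    """Propagate particles with an assumed random-walk motion model."""
    return particles + np.random.normal(0.0, speed_sigma * dt, particles.shape)

def update(particles, observation, obs_sigma=5.0):
    """Re-weight particles around an observed target position and resample."""
    d2 = np.sum((particles - observation) ** 2, axis=1)
    weights = np.exp(-0.5 * d2 / obs_sigma ** 2)
    weights /= weights.sum()
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

# Example: 1,000 particles seeded at the last known target position (meters, local frame).
particles = np.tile(np.array([120.0, 45.0]), (1000, 1))
particles = predict(particles, dt=1.0)                    # one prediction step
particles = update(particles, np.array([123.0, 44.0]))    # fuse a new observation
```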
  • Once the system has evaluated or predicted where the target is likely to be within the short time horizon, it can then attempt to predict where unmanned vehicle team members can be positioned to best “see” a target. This computation can be based on a pre-computed visibility map and a distribution of where the target is likely to be. Given a distribution of the target position p_{x,t}(x) and a visibility map p_{y|x}(y|x) giving a probability that a target at x is visible from an unmanned vehicle (e.g., a UAV) at position y, the system calculates the probability p_{y,t}(y) that the target is visible from an unmanned vehicle at position y. These calculations or algorithms can be implemented on a graphics processing unit (GPU) such as Quantum3D's COTS GPU for real-time computation. A GPU is a dedicated graphics rendering device that is very efficient at manipulating and displaying computer graphics. Its highly parallel structure makes it more effective than general-purpose CPUs for a range of complex algorithms. The visibility map is computed ahead of time, so that at every position p_{y|x}(y|x) can be represented for constant x as a polygon that is fast to compute. The GPU can be used to accumulate the polygons in a buffer to generate a visibility map, an example of which is illustrated in FIG. 1 and discussed in more detail below. The illustrated polygons are equally spaced on a grid, and each polygon represents the visibility at its center point. A full circle means that the area is unoccluded (in this map by the illustrated buildings). Otherwise, the polygon is effectively a radial plot representing the elevation angles from which the point is visible. Therefore, the smaller the polygon, the less that area is visible from the air.
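  • The calculation of p_{y,t}(y) described above reduces, on a discretized grid, to a weighted sum of the precomputed visibility values over the target distribution. The sketch below shows that summation; the grid discretization, array shapes, and variable names are assumptions made for illustration.

```python
import numpy as np

def viewpoint_likelihood(visibility: np.ndarray, p_target: np.ndarray) -> np.ndarray:
    """Discretized p_{y,t}(y) = sum_x p_{y|x}(y|x) * p_{x,t}(x).

    visibility: (num_viewpoints x num_target_cells) matrix from the visibility map.
    p_target:   current (possibly unnormalized) target distribution over target cells.
    """
    p_target = p_target / p_target.sum()   # normalize the target distribution
    return visibility @ p_target           # probability the target is visible from each viewpoint

# Example: three candidate viewpoints, four ground cells, uniform target distribution.
vis = np.array([[1.0, 0.2, 0.0, 0.6],
                [0.1, 0.9, 0.8, 0.0],
                [0.5, 0.5, 0.5, 0.5]])
print(viewpoint_likelihood(vis, np.ones(4)))   # -> [0.45, 0.45, 0.5]
```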
  • Visibility codes are then generated for the area illustrated in FIG. 1. These visibility codes can be illustrated, for example, in eight different directions for which visibility can be evaluated. The visibility codes are illustrated in FIG. 2 for the eight different directions. Shades of gray indicate the level of occlusion in the illustrated direction, with darker areas representing more occlusion. Thus, darker areas are less visible from the given direction; i.e., the darker the area, the higher the minimum elevation angle at which the sky is visible. FIG. 3 represents a map that is cumulative of the directional visibility illustrated in FIG. 2. FIG. 3's visibility likelihood map assumes that the target could be anywhere in the map (i.e., assuming a uniform distribution of the position of the dismount on the ground). Lighter areas are positions from which one is less likely to see the target. Notice that the center area is lighter because of the greater number of occlusions caused by the buildings.
  • FIG. 5 illustrates a visibility likelihood map generated when a non-uniform distribution of possible target positions is known, as illustrated in FIG. 4. When a non-uniform distribution of possible target positions can be used, the area from which the target is likely to be visible is, understandably, much more concentrated than in the map of FIG. 3.
  • The system next calculates a path for one or more unmanned vehicle team members that minimizes loss of the target. Paths are generated by the RRT and evaluated by the system and/or one or more operators to determine a path that minimizes a given criterion (e.g., the amount of time a target is lost). In certain embodiments of the present teachings, the system chooses a path y(t) that maximizes ∫ p_t[y(t)] dt, where the integral is performed over the time horizon.
  • Framework for Collaborative Unmanned Vehicle Planning
  • Choosing where an unmanned vehicle should go to find a target is a complicated decision that depends on where one believes the target is, and where it might go during the time it takes a team member to get to a viewing position. The present teachings provide the capability to evaluate a number of paths and choose a path or accept instructions from an operator regarding path choice. Once the path is selected, one or more unmanned vehicle team members are directed in accordance with those paths to execute autonomous navigation.
  • As stated above, the present teachings can combine RRTs to represent possible trajectories of the unmanned vehicles and Monte Carlo methods to represent the uncertainty about where the target is. Possible target trajectories are constructed over a finite time horizon and, during RRT traversal, the system tracks how many times it has seen a particle for each RRT node. This is because it can be disadvantageous to continue following a particle that has already been seen, and so a cost function can discount particles that have been seen more than once. This method can generate one or more paths that sweep out and attempt to consume the probability mass of where the target may be. Monte Carlo methods, with which those skilled in the art are familiar, are a class of computational algorithms that rely on repeated random sampling to compute their results. Monte Carlo methods are often used when simulating physical and mathematical systems, and when it is infeasible or impossible to compute an exact result. The present teachings contemplate, for example, evaluating more than 64,000 trajectories, and at each of the 128,000 RRT nodes, evaluating the visibility of particles, all at a rate of 1 Hz.
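  • The following sketch shows one way the path-scoring idea above could look in code: walk the RRT nodes of a candidate trajectory, give full credit the first time a particle (possible target position) becomes visible, and discount repeat sightings so the planner tends to consume the remaining probability mass. The visibility test, the discount factor, and the function names are assumptions for illustration only.

```python
import numpy as np

def score_path(path_nodes, particles, is_visible, discount=0.1):
    """Score one candidate RRT trajectory against the particle set."""
    seen_count = np.zeros(len(particles), dtype=int)
    score = 0.0
    for node in path_nodes:                  # nodes along the candidate trajectory
        for i, particle in enumerate(particles):
            if is_visible(node, particle):
                # full credit on the first sighting, heavily discounted afterwards
                score += 1.0 if seen_count[i] == 0 else discount
                seen_count[i] += 1
    return score

# Example: choose the best of several candidate trajectories.
# best = max(candidate_paths, key=lambda p: score_path(p, particles, is_visible))
```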
  • Visibility Map Generation Using GPUs
  • For map generation (in a timely manner), various embodiments of the present teachings perform visibility computations using DTED data (e.g., Level 4 data or higher ( 1/9th or 1/27th arc second spacing)) to create a map representing the visibility at each location. Performing these computations on a GPU allows rapid map generation and real-time calculation of visibility by rendering polygons representing visibility (see FIG. 1) and obtaining a map (e.g., a color-coded map) therefrom whose values (colors) tell the system and/or an operator how likely an unmanned vehicle team member, and particularly a UAV, is to be able to view a target from a given point.
  • In a color-coded visibility map that can be generated in accordance with the present teachings, the different colors utilized in the map can represent the direction from which the target is visible. For example, a red-colored area on the map can represent an area from which a target is visible to the east. Light blue, on the other hand, can indicate an area from which the target is visible to the west. Brighter color can, for example, indicate an area where an unmanned vehicle is more likely to see a target (from the color-indicated direction). Further, a mixture of colors can be used to indicate more than one direction from which the target may be visible. Such a map can be calculated for either a concentrated (there is some idea where the target is) or uniform (target could be anywhere) distribution of target position. Thus, if a potential target location is unknown, the system (e.g., the GPU) can nevertheless compute a best location by accumulating polygons over all possible positions. If the target location is known, the system (e.g., the GPU) can compute visibility maps several times per second.
  • As an alternative to, or in addition to such color-coded maps, a grey-scale visibility map can be generated and utilized, such as that illustrated in FIG. 6, which shows a likelihood that a target (dot at left side) can be viewed from any direction. A marginal probability of viewing the target from any direction from a given point is shown. Lighter indicates an increased chance of viewing the target. Thus, white areas correspond to viewpoints from which the target is likely to be viewed, and dark areas correspond to viewpoints from which the target is unlikely to be viewed. Such a map can be used for computing an A* or RRT path.
  • FIG. 7 illustrates a navigation cost map based on the values of FIG. 6, with equal cost contours. Thus, the visibility map is used as a cost map for computing an unmanned vehicle surveillance path, such as an A* path. The grayscale background of FIG. 7 shows the cost to come from any place on the map to the peak of the cost function. Contours of equal cost emanate from the starting position (the red dot in FIG. 6). White (lighter) areas correspond to a lower total travel cost (or higher likelihood of detection) and darker areas correspond to a higher total travel cost (lower likelihood of detection). The two paths traverse to the peak of the cost function, visiting the most likely areas for finding the target. Regarding the lower path, the planner could decide that, because of the large high probability area in the lower right corner where the target is likely to be seen, the unmanned vehicle should follow this path through the high probability area instead of taking the upper path, although the upper path may be shorter.
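  • A minimal grid search over such a cost map might look like the sketch below, which finds a low-cost route in the spirit of the A* planning mentioned above (the heuristic is omitted for brevity, reducing it to Dijkstra-style search). The grid representation, 4-connectivity, and toy map are assumptions for illustration; lower cell costs correspond to viewpoints with a higher likelihood of seeing the target, mirroring FIG. 7.

```python
import heapq

def plan_path(cost_map, start, goal):
    """Lowest-cost route on a grid whose cell costs come from the visibility cost map."""
    rows, cols = len(cost_map), len(cost_map[0])
    frontier = [(0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path, cost
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                new_cost = cost + cost_map[nr][nc]      # cost of entering the neighbor cell
                if new_cost < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = new_cost
                    heapq.heappush(frontier, (new_cost, (nr, nc), path + [(nr, nc)]))
    return None, float("inf")

# Example: route from the start cell to the goal on a toy 3x3 cost map.
path, total = plan_path([[1, 5, 1], [1, 9, 1], [1, 1, 1]], (0, 0), (2, 2))
```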
  • Thermal Vision Target Tracking
  • Certain embodiments of the present teachings additionally utilize thermal vision target tracking. Thermal vision target tracking can be accomplished, for example, by equipping one or more of the unmanned vehicle team members (e.g., a UGV) with a thermal infrared camera. The thermal infrared camera can comprise, for example, an FLIR Photon thermal imager. Thermal imaging is particularly useful for tracking human targets when the ambient temperature is less than about 90 degrees. Presently, an effective imaging range for a thermal imager can be extended to about 30 meters.
  • When a target has been located via thermal imaging, tracking software can apply thresholding to the thermal image and eliminate isolated pixels to filter noise. The centroid of the remaining points can then be used to determine a bearing to the target within the image plane. A following behavior can turn the UGV to face the target based on a horizontal coordinate of the centroid, and can maintain a desired distance from the target based on a vertical coordinate of the centroid (i.e., if the target is higher (farther) in the image than desired, the UGV moves forward, and if the target is lower (nearer) in the image than desired, the UGV halts or moves backward). In this way, the UGV follows the target while maintaining a desired separation.
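  • The following sketch illustrates the thresholding, noise filtering, and centroid-based following behavior just described. The threshold value, the desired image row, the control gains, and the use of scipy for isolated-pixel removal are assumptions; only the overall turn-on-column, drive-on-row logic comes from the description above.

```python
import numpy as np
from scipy import ndimage

def follow_command(thermal_image, hot_threshold=200, desired_row=120):
    """Return (turn_rate, drive_speed) that keeps the hot centroid centered and at range."""
    mask = thermal_image > hot_threshold              # candidate target pixels
    mask = ndimage.binary_opening(mask)               # drop isolated noise pixels
    if not mask.any():
        return 0.0, 0.0                               # no target in view: stop
    rows, cols = np.nonzero(mask)
    centroid_row, centroid_col = rows.mean(), cols.mean()
    turn_rate = 0.005 * (centroid_col - thermal_image.shape[1] / 2)   # turn to face the target
    drive_speed = 0.01 * (desired_row - centroid_row)                 # higher (farther) -> drive forward
    return turn_rate, drive_speed
```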
  • Certain embodiments of the present teachings can additionally or alternatively utilize thermal imaging with a UAV.
  • The Control Architecture
  • In accordance with certain embodiments of the present teachings, the control architecture comprises the following three primary parts: (1) a fully integrated architecture fusing the U.S. Army's Armament Research, Development and Engineering Center multi-platform controller (ARDEC MPC) architecture, a Mission Planner with collaborative engagement capabilities, and local Decentralized Data Fusion nodes on the unmanned vehicles; (2) a populated Mission Planner with target engagement-specific Mission Task Components, associated agents and defined interface(s) to integrate with the MPC architecture; and (3) a functional architecture decomposition of specific Mission Task Components to clarify how high level tasks are executed at the low level by the respective unmanned platforms. These parts are described in detail hereinbelow.
  • The present teachings contemplate many or all of the following functions being performed by the operator and/or members of the unmanned vehicle team. Hereinafter, the operator and members of the unmanned vehicle team are referred to as mission agents.
  • Engagement functions, the agents that perform each function, and the corresponding OCU/UAV/UGV behaviors:
      • Maneuver to get target in view (UGV, UAV): Read in terrain/road network map; Path plan from pt A to pt B; Register self to map; Maneuver map; Update location on path plan; Avoid obstacles to conduct path plan
      • Detect (Operator, UGV): Identify target of interest in image
      • Threat evaluation (Operator): Operator determines target threat in image; OCU calculates target geo-position from sensor platform location, pointing, and geo-referenced map
      • Acquisition (Operator, UGV, UAV): Track features of background image and difference with target of interest
      • Track (UGV, UAV): Reverse kinematics background features with vehicle motion model; Generate estimated target position measurement; Generate target track and uncertainty
      • Correlate track (UGV, UAV): Coalesce platform feature data; Register localized platform; Match local platform tracks
      • Fuse track (UGV, UAV): Update global track with respective tracks and uncertainty
  • In accordance with certain embodiments of the present teachings, simultaneous control of at least one UAV and at least one UGV is achieved from a single operator control unit (OCU), which can be achieved using waypoint navigation for both the UAV and UGV. Because the UAV and UGV may handle waypoints in different ways, the system can provide integration of waypoint control. For example, waypoint paths generated by the Supervisor OCU can be translated to appropriate UAV waypoint paths. Software tools can be employed for task allocation to support coordinated search, pursuit, and tracking of a target with unmanned vehicles.
  • The overall system in accordance with an exemplary embodiment of the present teachings comprises an iRobot PackBot UGV with, for example, a Navigator payload and sensor suite. The PackBot and its Navigator Payload sensor suite can operate using the Aware 2.0 robot control architecture. The PackBot, as illustrated in FIG. 8, is equipped with two main treads used for locomotion, and two articulated flippers having treads that are used to climb over obstacles. A PackBot can typically travel at sustained speeds of up to 4.5 mph. A PackBot's electronics are typically enclosed in a compact, hardened enclosure, and can comprise a 700 MHz mobile Pentium III with 256 MB SDRAM, a 300 MB compact flash memory storage device, and a 2.4 GHz 802.11b radio Ethernet.
  • The system can also comprise an AeroVironment Raven UAV that is back-packable and hand-launchable. A Raven is illustrated in FIG. 9. The Raven typically has a 90-minute flight duration and features EO/IR payloads and GPS. The Raven can be operated manually or programmed for autonomous operation using, for example, a laptop mission planner for processing and the Raven's advanced avionics and precise GPS navigation. The Raven has a wingspan of 4.5 feet and can weigh just over 4 lbs. It can be assembled in less than 30 seconds and supports aerial surveillance up to 10 km in line-of-sight range. The Raven can travel at speeds of up to 50 knots. It can be equipped with forward-looking and side-looking camera ports.
  • FIG. 9 illustrates an architecture supporting an integrated system in accordance with various embodiments of the present teachings. The OCU Supervisor includes a Mission Planner with a collaborative engagement (CE) node. Combat Decision Aid Software (CDAS) and CDAS/C2 nodes (bottom left) provide mission-level commands to the Mission Planner CE node and can receive mission status, target information and event data from the Mission Planner. Mission-relevant image data, target data and unmanned vehicle data can be provided to the Situational Awareness (SA) server. The Mission Planner CE node is the central node that manages the overall unmanned system and decomposes the CDAS/C2 high level mission commands to appropriate unmanned system agents. The Mission Planner CE node functions are described in more detail hereinbelow. CDAS is a high-level mission planning, decision support tool providing simultaneous situational awareness, data sharing, and mission analysis for multiple combat units. CDAS provides libraries, functions, and capabilities that minimize redundant efforts and conflicting capabilities or efforts, and can assist in providing relevant, timely, and critical information to the operator.
  • In certain embodiments, a CDAS CoT component can be utilized to translate Aware 2.0 interface calls from the Supervisor OCU to CoT messages that are sent to CDAS, and to receive CoT messages over TCP and/or UDP and translate them to Aware 2.0 events/interface calls.
  • The Mission Planner conducts discrete management of tasks and assigns those tasks to the unmanned vehicles while the Decentralized Data Fusion (DDF) nodes manage, in a distributed fashion, low-level continuous execution of the tasks and coordinate shared data and discrete maneuvers. DDF function is described in detail hereinbelow. The illustrated architecture allows for the Mission Planner to handle contingency operations as they arise and respond to them by updating tasks to the team agents while the DDF nodes support tight collaboration and coordinated maneuvers to pursue and geo-locate the target.
  • The Mission Planner CE node can be separate from the OCUs from a functional and interface perspective. The software modules can be designed to be plug and play. Therefore, the Mission Planner module can have interfaces allowing it to be located in the OCU Supervisor or separated onto another piece of hardware. In fact, the Mission Planner node and the OCUs for both UAV(s) and UGV(s) are envisioned to be located in the same hardware unit, referred to herein as the “OCU Supervisor.” The architecture design can allow a single operator to monitor and control the mission through the OCU Supervisor. The collaborative software system can be quickly responsive to mission changes and replanning, while also reducing the complexity in the number of components and their respective interfaces. This is facilitated by the UAV and UGV systems supporting waypoint navigation.
  • In accordance with certain embodiments, the OCU Supervisor can display both video and telemetry data of each unmanned vehicle to the operator. It can also allow the operator to manually control each unmanned vehicle. In certain embodiments, while the OCU Supervisor includes the hardware typically used to manually operate the UGV, a separate hand controller can be utilized for manual control of the UAV. The exemplary architecture illustrated in FIG. 9 includes two UGVs, one UAV, one UGV OCU and one UAV OCU. The number of unmanned vehicles and OCUs may vary in accordance with the present teachings.
  • Tactical UAVs are typically designed for optimal endurance and hence minimized for weight. As a result, computing on the UAV platform is typically minimal. Most of the required collaborative DDF processing and coordinated navigation software will therefore be located on the UAV OCU, rather than on the UAV platform itself. On the other hand, tactical UGVs are typically not as constrained for weight and endurance and have significantly higher on-board processing capacity. In such a case, most or all of the required collaborative DDF processing can occur on the UGV platform.
  • The exemplary architecture illustrated in FIG. 9 supports not only individual and coordinated control of the UAV and UGVs, but also allows the UAV to act as a data relay. Joint Architecture for Unmanned Systems (JAUS) messages sent to the UAV can be passed through to the UGV for processing. Hence, a UAV data relay can significantly extend the control range of the UGV by at least an order of magnitude.
  • Mission Planner—Collaborative Engagement Architecture
  • In certain embodiments, the Mission Planner specifies the high-level mission to be executed, and the architecture in FIG. 10 illustrates the functional blocks that can be utilized to plan mission execution. The definition of the illustrated architecture is based on a defined mission planner framework that can be a modified version of the Overseer SBIR project, which can provide localization, path planning, waypoint navigation, and obstacle avoidance for at least unmanned ground vehicles. This architecture can manage resources of the mission to optimize execution from a centralized planner at the system level. Mission Task Components (MTC) are tasks over which the Mission Planner has purview and are assigned to agents through a decision step in the Task Allocation Module.
  • The illustrated mission thread contains four agents: an operator; a UAV; and two UGVs. The capabilities and status of the operator and unmanned vehicles are recorded and continually updated in the Agent Capabilities Database. In accordance with various embodiments, if an unmanned vehicle has low battery power or has been damaged, the Agent Capabilities Database stores such information and can provide appropriate weighting to the agent's ability to perform a given task, which will impact the Task Allocation decision.
  • MTC tasks are intended to manage the highest level of tasks for executing the collaborative engagement mission. These high-level tasks can be executed by individual agents or a combination of agents. In all cases, specific software modules will support each high-level MTC. As illustrated in FIG. 10, the primary MTCs to conduct a collaborative target engagement can be:
  • Manage Agent Resources
      • This task allows the Mission Planner to identify available agents, monitor the status of current agents, and acknowledge disabled agents in the mission. If agents are disabled or additional agents become available, the Mission Planner can either automatically update agent allocation or notify the human supervisor for further instruction.
  • Manage Communications
      • This task monitors the “health” of the communications structure given different RF environments and monitors the communications traffic between agents. If an agent arrives at a target for which more information is desired, this MTC may allocate increased bandwidth to that agent to transmit more data about that target. If another agent maneuvers into an area of increased multi-path interference, the Mission Planner can modify the channel allocation to improve signal power from the agent.
  • Search Area
      • This task applies to the surveillance aspect of conducting a target engagement mission. The Search Area MTC can task an agent to conduct a defined search path through a predefined area of interest consistent with that agent's capabilities as defined in the Agent Capabilities Database.
  • Pursue Target
      • This task applies to an agent that is not in the vicinity of the target but the target's location relative to the agent is known. The Pursue Target MTC can task an agent to direct its course toward the target's estimated location and navigate to the location.
  • Geolocate Target
      • This task applies to an agent that is in the vicinity of the target and is able to collect data on the target. The agent can apply onboard sensors to the target to collect positioning, state, or feature data for the target and provide the collected data to other agents and the Mission Planner.
  • In accordance with certain embodiments, the Task Allocation Module manages the execution of the collaborative engagement mission and assigns MTCs to appropriate agents given their capabilities. The Task Allocation Module can also allocate a sequence of multiple MTC tasks, as long as the assigned agent's capabilities support those tasks. The DDF algorithms, which can include a state machine on each agent, can support sequential execution of tasks with gating criteria to execute subsequent tasks. The Task Allocation Module can provide data to the MPC SA server, which can then provide information to the ARDEC architecture nodes as described above. This allows feedback to the ARDEC system for monitoring, situational awareness, and display.
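  • Purely as an illustration of how capability weighting from the Agent Capabilities Database could feed the Task Allocation decision, the sketch below scores each agent for a requested MTC and picks the best. The scoring scheme, field names, and numeric weights are assumptions, not details of the disclosed Mission Planner.

```python
def allocate_task(mtc, agents):
    """Return the agent with the highest weighted suitability for the given MTC."""
    def score(agent):
        if mtc not in agent["supported_mtcs"]:
            return float("-inf")                           # agent cannot perform this MTC
        health = 0.0 if agent["disabled"] else agent["battery_level"]
        return health * agent["suitability"][mtc]          # capability-weighted suitability
    return max(agents, key=score)

agents = [
    {"id": "UAV-1", "supported_mtcs": {"Search Area", "Geolocate Target"}, "disabled": False,
     "battery_level": 0.8, "suitability": {"Search Area": 0.9, "Geolocate Target": 0.6}},
    {"id": "UGV-1", "supported_mtcs": {"Pursue Target", "Geolocate Target"}, "disabled": False,
     "battery_level": 0.7, "suitability": {"Pursue Target": 0.8, "Geolocate Target": 0.9}},
]
assigned = allocate_task("Geolocate Target", agents)       # -> UGV-1 in this toy example
```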
  • Mission Task Component Functional Architecture
  • While the Mission Planner architecture provides a high-level view of the management of the overall collaborative engagement mission, functional state diagrams and a description of each MTC are provided below regarding software module design. The set of functions to accomplish a mission encompass the execution of simultaneous tasks as well as sequential tasks. While some tasks are executed independently, other tasks require collaboration with other unmanned vehicle agents. Tasks requiring collaboration among the unmanned vehicle agents are highlighted.
  • The Manage Agent Resources MTC and the Manage Communications MTC have common aspects relevant to the management of the overall system, independent of the specific mission to be executed. The functional architecture is primarily defined by the Mission Planner. The remaining three MTCs are specific for performing a target engagement mission and can therefore be more complex. The illustrated functional flow block architectures for these tasks define required functions among the unmanned vehicles and supervisory operator.
  • A Search Area MTC embodiment illustrated in FIG. 11 begins with selection of an area of interest from the supervisory operator. Depending on the positions and capabilities of the unmanned vehicles, either one type of unmanned vehicle or both types of unmanned vehicles can be assigned by the Mission Planner to search the area. The upper block specifies the Search Area MTC functions to be performed by a UAV, and the lower block specifies the Search Area MTC functions to be performed by a UGV. There can be significant functional similarities between the air and ground unmanned vehicles. One exception is that the UGV will more often encounter unanticipated obstacles. Thus, while the UGV will navigate with an on-board road network map, live conditions may include additional obstacles such as road traffic, crowds, rubble piles, etc., which the UGV will have to circumnavigate. This may not occur in every mission, and therefore the circumnavigation function is represented by a non-solid line. The follow-on task for circumnavigation is a collaborative task, Collaborate Path. This task has a bolded border to indicate that it has a separate functional block architecture, described below, which involves other agents aiding the UGV to navigate and re-route its path. In addition, the supervisory operator will monitor the unmanned agents' actions as they maneuver through their search patterns. The supervisor can, at any time, input waypoints to update the search pattern for any of the unmanned vehicles.
  • A Pursue Target MTC embodiment is illustrated in FIG. 12 and has a layout that is similar to the Search Area MTC. Initially, either (1) the target location is known by intelligence and the operator manually provides the target coordinates, or (2) the target is detected by the operator when viewing available image data and the operator selects the target to pursue. To pursue a target, each assigned unmanned vehicle (UAV functions are depicted in the upper box and UGV functions are depicted in the lower box) does not need to have the target of interest in its field of view. Rather, if the target is not in its field of view, it can find a path from its position to the target's estimated position, which can be provided by the fused DDF target track position from its neighboring DDF nodes. The fused DDF track can be generated by all available sensor measurements and intelligence on the target.
  • A Geolocate Target MTC embodiment is illustrated in FIG. 13 and has the highest number of tasks requiring collaboration and, therefore, the highest number of DDF software modules. The task of target selection is executed by the supervisory operator, denoted by the human icon. Target detection can occur in a different MTC, such as Pursue Target, but this function is addressed here for completeness in the event that other MTCs were not executed beforehand. The Mission Planner can assign available unmanned vehicles to geolocate a target if the target of interest is in the unmanned vehicle's camera view. If the Mission Planner designates a UAV to execute this MTC, then the sequence of tasks in the upper box is followed. The UGV sequence of tasks for geolocating a target is set forth in the lower box. Once the target of interest is specified in an image, the UGV can maintain track on the image in the 2D camera coordinate frame using, for example, Hough transforms, hysteresis and time-averaged correlation.
  • In certain embodiments of the present teachings, the UGV comes to a stop to eliminate noise before its on-board laser ranger or other functionality is able to accurately measure the range to the target. This range measurement is correlated with angle measurements from the image to estimate the target's position. A transformation to geocoordinates is calculated, and the target's track state can be either initialized or updated with this estimate. The UGV can then transmit information to the other DDF nodes, including to the operator's Supervisor OCU for displaying the target's updated track state. A fusion step can occur across all DDF nodes and the updated and integrated DDF fused track state can update the UGV's local track. The UGV can then rely on this updated fused track for directing the camera's pointing angle, for example via a pan/tilt mechanism, to ensure camera coverage of the target. If necessary, the UGV can navigate and pursue the target to maintain target ranging and observations. If the UGV, while in pursuit of the target, arrives at an obstacle that its obstacle detection/obstacle avoidance (ODOA) algorithm is unable to circumnavigate, the UGV can initiate the Collaborate Path DDF task to elicit aid from neighboring unmanned vehicle agents.
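  • As a simple illustration of the range-and-bearing to geo-coordinate step described above, the sketch below combines the UGV's position and heading, the camera pan angle, and the laser range into an approximate target latitude and longitude. The flat-earth approximation, argument conventions, and constants are assumptions for illustration.

```python
import math

def target_geoposition(ugv_lat, ugv_lon, heading_deg, pan_deg, range_m):
    """Project a range/bearing measurement from the UGV into approximate geo-coordinates."""
    bearing = math.radians(heading_deg + pan_deg)      # absolute bearing: vehicle heading plus camera pan
    north_m = range_m * math.cos(bearing)
    east_m = range_m * math.sin(bearing)
    lat = ugv_lat + north_m / 111_320.0                                   # approx. meters per degree latitude
    lon = ugv_lon + east_m / (111_320.0 * math.cos(math.radians(ugv_lat)))
    return lat, lon

# Example: target 25 m away at a 30-degree camera pan from a north-facing UGV.
print(target_geoposition(34.0522, -118.2437, heading_deg=0.0, pan_deg=30.0, range_m=25.0))
```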
  • The Collaborate Path task, an embodiment of which is illustrated in FIG. 14, can be instantiated when a UGV automatically determines that it cannot execute a planned path due to an unanticipated blockage that it cannot circumnavigate. The UGV transmits a coded message to other DDF node agents seeking assistance. The other DDF nodes are able to determine which agent is best positioned to provide aid. This assisting agent can either be a UAV or UGV, which maneuvers toward the agent needing assistance. Assistance can include, for example, providing additional information regarding the size and location of the blockage, as well as alternative navigation routes. The present teachings contemplate a variety of techniques for detecting obstacles in the UGV's path. For example, imagery can be used by the operator to select obstacles that the blocked UGV should take into account. Alternatively or additionally, more sophisticated sensors and obstacle discrimination algorithms can automatically detect and recognize obstacles and provide blockage information, including geo-coordinates of the blockage, the type of blockage, the size of the blockage, etc.
  • When available, operator-selected obstacles from the image data can be converted to geo-coordinates. The geo-coordinates allow the obstructed UGV to recalculate its path plan. If the UGV is unable to reach a viable path plan solution, it can transmit a correction message to an assisting agent which can then continue maneuvers to provide additional blockage information. If the obstructed UGV is able to navigate with the revised path plan, it can transmit a message to the assisting agent indicating that it has successfully determined a revised route or cleared the obstruction.
  • Because the system architecture embodiment described herein provides the Mission Planner CE node at a high level, the unmanned vehicle agents at the local nodes may take on low-level tasks in a decentralized fashion. The DDF nodes support autonomous collaboration for targeting, and can provide significant performance for target localization while keeping processing and bandwidth utilization at easily manageable levels.
  • Decentralized Data Fusion (DDF)
  • A decentralized data fusion network consists of a network of sensing nodes, each with its own processing facility, which do not require any central fusion or central communication facility. In various embodiments of the present teachings, the sensing nodes are all components containing DDF nodes, which include the OCUs and the unmanned vehicle platforms. In such a network, fusion occurs locally at each node on the basis of local observations and the information communicated from neighboring nodes. A decentralized data fusion network is characterized by three constraints:
      • 1. No one node should be central to the successful operation of the network.
      • 2. Nodes cannot broadcast results, and communication must be kept on a strictly node-to-node basis.
      • 3. Sensor nodes do not have any global knowledge of sensor network topology, and nodes should only know about connections in their own neighborhood.
  • The constraints imposed provide a number of important characteristics for decentralized data fusion systems. Eliminating a central node and any common communication facility ensures that the system is scalable, as there are no limits imposed by centralized computational bottlenecks or lack of communication bandwidth. Ensuring that no node is central and that no global knowledge of the network topology is required allows the fusion system to survive the loss or addition of sensing nodes. The constraints also make the system highly resilient to dynamic changes in network structure. Because all fusion processes must take place locally at each sensor site through a common interface and no global knowledge of the network is required, nodes can be constructed and programmed in a modular, reconfigurable fashion. Decentralized networks are typically characterized as being modular, scalable, and survivable.
  • The DDF fusion architecture implements decentralized Bayesian estimation to fuse information between DDF nodes. Decentralized estimation schemes are derived by reformulating conventional estimators such as Kalman filters in Information or log-likelihood form. In this form, the fusion operation reduces to summation of its information sources. For networked implementations, this summation can be performed in an efficient decentralized manner by passing inter-node state information differences. This concept is shown in FIG. 15, which illustrates network fusion by propagating inter-node differences.
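  • A small numeric sketch of that information-form summation follows. Converting each Gaussian estimate (x, P) to information form (Y = P^-1, y = Y x) makes the fusion of independent estimates a simple addition; the two-node example, the diffuse shared prior, and the variable names are assumptions made for illustration.

```python
import numpy as np

def to_information(x, P):
    """Convert a Gaussian estimate (mean, covariance) to information form."""
    Y = np.linalg.inv(P)
    return Y @ x, Y

def from_information(y, Y):
    """Convert an information-form estimate back to (mean, covariance)."""
    P = np.linalg.inv(Y)
    return P @ y, P

# Local 2D target-position estimates from two nodes (e.g., a UAV and a UGV).
y1, Y1 = to_information(np.array([10.0, 5.0]), np.diag([4.0, 4.0]))
y2, Y2 = to_information(np.array([11.0, 4.5]), np.diag([1.0, 9.0]))

# With no shared information between the nodes, fusion reduces to summation.
x_fused, P_fused = from_information(y1 + y2, Y1 + Y2)
```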
  • The higher the number of fusion iterations and the more frequently this synchronization occurs, the more agents share a common map with all the known target locations. The tempo of mission events, namely the speed at which the target or agents move, will impact the commonality of each platform's knowledge of the locations of all participants in the mission.
  • In accordance with certain embodiments, the functional blocks required to implement this fusion process consist of sensor pre-processing, local state estimation, and inter-node DDF communication management. When actuation or mode selection that affects the sensor measurement quality is available, an additional control block is appropriate to direct sensing resources. These elements and their connections are shown in FIG. 16.
  • Each of the blocks illustrated in FIG. 16 is implemented as one or more software components that can communicate through standard network and inter-process protocols. The result is a highly flexible and reconfigurable system architecture. Component modules can be located and connected in a customizable manner that delivers the most appropriate system configuration. For example, for small expendable UAVs with limited computing power, a DDF structure can connect processed sensor and actuation signals wirelessly to a remote processor for further processing, estimation, and control.
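  • The short Python sketch below is an assumption-laden illustration, not the disclosed implementation; it shows one way the functional blocks named above could be composed behind simple callable interfaces so that any block can run locally or behind a network or inter-process link. The class and parameter names are hypothetical.

    # Illustrative sketch (assumptions only): composing a DDF node from the
    # functional blocks described above, connected through callable interfaces.

    class DDFNode:
        def __init__(self, preprocess, estimator, comms, sensor_control=None):
            self.preprocess = preprocess          # sensor pre-processing block
            self.estimator = estimator            # local state estimation block
            self.comms = comms                    # inter-node DDF communication manager
            self.sensor_control = sensor_control  # optional sensing-resource control

        def step(self, raw_measurement):
            z = self.preprocess(raw_measurement)
            estimate = self.estimator(z)
            self.comms(estimate)                  # propagate inter-node differences
            if self.sensor_control is not None:
                self.sensor_control(estimate)     # e.g. re-point a PTZ camera
            return estimate

    # Usage: blocks can be plain functions, remote-procedure stubs, or processes.
    node = DDFNode(preprocess=lambda z: z,
                   estimator=lambda z: {"state": z},
                   comms=lambda est: None)
    print(node.step([12.3, 4.5]))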
  • The DDF network integrates multiple estimates from multiple vehicles in a way that is simple, efficient, and decentralized. A decentralized fusion node for an unmanned vehicle agent is illustrated in FIG. 18. For every sensor, there is a DDF node with appropriate functional elements. Each node maintains a local estimate of the state of the target vehicle, which can include the target vehicle's position, its velocity, and other identifying information. When all nodes are connected to the network and the number of nodes is small, a DDF Communication Manager can follow a simple rule: at every time step, each node communicates updates to both its local state estimate and its uncertainty to its neighbors. These changes propagate through the network to inform all nodes on the connected network using peer-to-peer communication.
  • In general, the network may experience changes in connectivity over time. Consistently handling changes in network and node connectivity requires more complex DDF communication management. Upon establishing a connection, each node performs an additional operation to determine the estimate information it shares in common with the new neighbor node. Exchanged estimates are then aggregated into the local node's estimate without double counting that common information.
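  • One conventional way to bookkeep such shared information is a per-link channel filter. The sketch below is an illustrative assumption, not the patent's implementation; the Channel class, its interfaces, and the numbers used are hypothetical.

    import numpy as np

    # Illustrative channel-filter sketch: each link keeps the information already
    # shared, so only new information is exchanged when a connection is (re)made.

    class Channel:
        def __init__(self, dim):
            self.y_common = np.zeros(dim)         # shared information vector
            self.Y_common = np.zeros((dim, dim))  # shared information matrix

        def new_information(self, y_local, Y_local):
            """Return what the neighbour has not yet received, then update."""
            dy, dY = y_local - self.y_common, Y_local - self.Y_common
            self.y_common, self.Y_common = y_local.copy(), Y_local.copy()
            return dy, dY

        def fuse_incoming(self, y_local, Y_local, dy, dY):
            """Add a neighbour's increment; common information is not re-added."""
            self.y_common += dy
            self.Y_common += dY
            return y_local + dy, Y_local + dY

    # Usage: on first contact the channel holds no common information, so the
    # entire local estimate is sent; later exchanges send only the increment.
    link = Channel(dim=2)
    y_a, Y_a = np.array([2.5, 1.25]), np.diag([0.25, 0.25])
    dy, dY = link.new_information(y_a, Y_a)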
  • Collaborative Target Tracking Applied to Mission Task Components
  • In certain implementations of the present teachings, the operator utilizes the Supervisor OCU to manually detect one or more targets in received video data. In such implementations, the operator is relied on for target detection due to the large variety of adversary types that might need to be detected, as well as the broad range of backgrounds from which targets need to be identified. Once a target is detected, the low-level task of tracking it can be automated with software. Target tracking in EO and IR imagery from one or more UAVs and/or one or more UGVs can utilize an algorithm that maintains an adaptive classifier separating the target from its background. The classifier decides which pixels belong to the target and which belong to the background, and is updated iteratively using a window around the target's current location. If the system is in danger of losing the target, either due to a potential for occlusion by known buildings or because the target becomes harder to distinguish from the background or other targets, the system can alert the operator that assistance is required. The goal is to minimize the amount of operator assistance necessary.
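  • As an illustration of this kind of adaptive target/background classifier, the Python sketch below uses a simple per-pixel likelihood-ratio scheme based on colour histograms updated in a window around the target; the specific histogram scheme, class name, and parameters are assumptions and not the disclosed algorithm.

    import numpy as np

    # Illustrative sketch only: an adaptive classifier that separates target
    # pixels from background pixels and is updated from a window around the
    # target's current location.

    class AdaptivePixelClassifier:
        def __init__(self, bins=16, alpha=0.1):
            self.bins = bins
            self.alpha = alpha                  # adaptation (update) rate
            self.h_target = np.ones(bins)       # target colour histogram
            self.h_background = np.ones(bins)   # background colour histogram

        def _hist(self, pixels):
            h, _ = np.histogram(pixels, bins=self.bins, range=(0, 256))
            return h + 1e-6                     # avoid division by zero

        def update(self, window, target_mask):
            """Adapt histograms from a window (uint8) around the target."""
            self.h_target = (1 - self.alpha) * self.h_target + \
                self.alpha * self._hist(window[target_mask])
            self.h_background = (1 - self.alpha) * self.h_background + \
                self.alpha * self._hist(window[~target_mask])

        def classify(self, window):
            """Per-pixel likelihood ratio: >1 suggests 'target', <=1 'background'."""
            idx = np.clip((window.astype(int) * self.bins) // 256, 0, self.bins - 1)
            pt = self.h_target[idx] / self.h_target.sum()
            pb = self.h_background[idx] / self.h_background.sum()
            return pt / pb

    # A ratio map that flattens toward 1.0 everywhere is one possible cue for
    # alerting the operator that the target is becoming hard to distinguish.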
  • Approximate geolocation from UGVs can be estimated from heading and position information, as well as estimated pointing information from Pan-Tilt-Zoom cameras. Due to a dependence on the attitude of the vehicle, geolocation from UAV video can be more difficult without certain inertial systems or gimbaled cameras. Alternatively, geolocation for UAVs can be implemented by matching frames from UAV video to previously acquired aerial imagery, such as from recent satellite imagery. For a given area, a library of feature descriptors (e.g., large visible landmarks) is constructed. For each received image, feature detection is performed, the library is queried, and a location on the ground best matching the query image is chosen.
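  • A minimal sketch of geolocating a UAV frame by matching it against a library of feature descriptors built from georeferenced aerial imagery is given below; it assumes ORB descriptors and brute-force matching via OpenCV, and the thresholds and data structures are arbitrary illustrative choices rather than the disclosed method.

    import cv2

    # Illustrative sketch: match a UAV video frame against a library of
    # georeferenced reference images, each described by ORB feature descriptors.

    orb = cv2.ORB_create(nfeatures=1000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def build_library(reference_images):
        """reference_images: list of (grayscale image, (lat, lon)) tuples."""
        library = []
        for img, latlon in reference_images:
            _, descriptors = orb.detectAndCompute(img, None)
            if descriptors is not None:
                library.append((descriptors, latlon))
        return library

    def geolocate_frame(frame, library, min_matches=25):
        """Return the (lat, lon) of the best-matching reference image, if any."""
        _, descriptors = orb.detectAndCompute(frame, None)
        if descriptors is None:
            return None
        best_latlon, best_score = None, 0
        for ref_descriptors, latlon in library:
            matches = matcher.match(descriptors, ref_descriptors)
            score = sum(1 for m in matches if m.distance < 50)
            if score > best_score:
                best_latlon, best_score = latlon, score
        return best_latlon if best_score >= min_matches else None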
  • Failure detection, image stabilization, and improvements to operator target track initialization can improve target tracking performance for the unmanned vehicle agents. If target tracking is initialized by the operator with an over-sized image region, the tracker may confuse target characteristics with image background characteristics, leading to track loss. An under-sized image region may cause the tracker to reject or fail to incorporate certain target characteristics, which could result in tracking failure. Proper sizing of the tracker initialization region can be achieved in a variety of ways, including by operator training. In certain embodiments, and particularly for UAV tracking, utilizing both motion-based tracking and color-based tracking can improve overall tracking success for the system.
  • In certain embodiments of the present teachings, during a Pursue Target MTC, a DDF Estimation System uses measurements from ground and aerial agents to localize the target and then disseminates the target location information to be acted upon by the system's Collaborative Path planning systems. In various embodiments, the operator begins by designating where to look for targets, for example by drawing on a map displayed on the Supervisor OCU. The unmanned vehicle agents can then converge on the area, and the operator may, for example, choose to detect a target on the UAV video. The UAV DDF node's Automatic Target Tracking could then take over and track the target's position in the video. Several seconds later, a unique landmark in the scene can be found which uniquely identifies the area, so that the target location at that time can be geolocated. At this point, an estimate of the target's coordinate position is known. The Mission Planner can then initiate pursuit by unmanned vehicle agents (e.g., one or more UGVs) using the estimated position. Once in pursuit or when the target is in view, the one or more UGVs can provide their own estimates of the target's position. When these estimates become available, an ad-hoc network can be formed among the nodes, and DDF can take over aggregating the estimates into a single minimum variance estimate. During surveillance, if the original UAV loses its video connection, available UGVs can maintain video surveillance and continue tracking and updating target position.
  • In a Collaborate Path MTC, the responsibilities of the Distributed Estimation System are largely the same as in a Pursue Target MTC for detection, geolocation, and tracking. The purpose is to geolocate obstacles on the ground that are selected by the operator. This task can be simplified by assuming that the ground obstacles are stationary. The notable difference is that these obstacles are not targets of interest; rather, they are "repulsive" targets, for which the automatic path planning scheme of the UGV will reroute its path plan to select roads that do not contain those obstacles.
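  • The following sketch illustrates, under assumed data structures (a toy road graph and a set of blocked road identifiers), how a road-network planner can reroute around roads geolocated as containing "repulsive" obstacles; it is not the disclosed planner.

    import heapq

    # Illustrative sketch: Dijkstra search over a road graph that excludes any
    # road marked as containing an operator-selected obstacle.

    road_graph = {                      # node -> list of (neighbour, road_id, cost)
        "A": [("B", "road1", 1.0), ("C", "road2", 2.5)],
        "B": [("D", "road3", 1.0)],
        "C": [("D", "road4", 1.0)],
        "D": [],
    }
    blocked_roads = {"road3"}           # roads geolocated as containing obstacles

    def plan_path(graph, start, goal, blocked):
        """Shortest path that avoids blocked roads; returns (path, cost)."""
        queue = [(0.0, start, [start])]
        visited = set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == goal:
                return path, cost
            if node in visited:
                continue
            visited.add(node)
            for neighbour, road_id, edge_cost in graph.get(node, []):
                if road_id in blocked or neighbour in visited:
                    continue
                heapq.heappush(queue, (cost + edge_cost, neighbour, path + [neighbour]))
        return None, float("inf")

    print(plan_path(road_graph, "A", "D", blocked_roads))  # reroutes via road2/road4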
  • Supervisor OCU Interface
  • The Supervisor OCU interface facilitates the operator's management, command and control, and monitoring of mission execution. In accordance with certain embodiments of the present teachings, the Supervisor OCU display interface provides the operator with an intuitive understanding of mission status and expected execution of future agent actions. The use of certain mixed initiative approaches, such as dynamically accepting different levels and frequencies of intervention, self-recognition of needing assistance, and sharing of decision-making at specific levels, can assist the operator in managing a multi-unmanned vehicle mission.
  • Many display components (video data, status bars, and control buttons) can be configurable and, in certain embodiments, allow "drag and drop" placement for ease of use. The Supervisor OCU interface, an exemplary embodiment of which is illustrated in FIG. 19, can facilitate operator input of waypoints to redirect the unmanned vehicles' routes, selection of a specific unmanned ground vehicle to teleoperate, "freezing" of UGVs, and putting UAVs in a holding pattern. The illustrated interface allows the use of drag strokes to control aspects of the unmanned vehicles. The mapping of click-drag strokes in specific areas of the display interface can facilitate control of different vehicles, injection of waypoints, camera controls, and head-neck controls. The icons below the map view in the upper left allow the operator to inject waypoints simply by selecting a vehicle and then placing waypoints directly onto the map.
  • In accordance with various embodiments, the Supervisor OCU interface facilitates operator injection of high-level mission goals through interaction with the Mission Planner CE in the upper left section of the display. For example, in the case of the Search Area MTC, it is important to be able to quickly specify the area in which the target should be located. This interface can allow the operator to draw a polygon on a street map designating the area to be searched. This interface can also allow the operator to cue targets in the video streams emanating from the unmanned vehicle agents. Once the target has been specified, the vehicles will track the target autonomously or semi-autonomously. The interface can also integrate directives from the operator that keep the vehicle from going into certain areas. For example, if the operator sees an area that is blocked, the area can be marked as a NO-GO region by, for example, drawing on the map. Path planning can then automatically reroute any plans that might have required navigation through those areas.
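  • A minimal illustration of how an operator-drawn NO-GO polygon could be tested against planned waypoints is given below, using a standard ray-casting point-in-polygon test; the polygon and waypoint coordinates are hypothetical, and waypoints falling inside the region would trigger rerouting.

    # Illustrative sketch: test planned waypoints against an operator-drawn
    # NO-GO polygon so that path planning can reroute around the area.

    def point_in_polygon(point, polygon):
        """Ray casting: returns True if (x, y) lies inside the polygon."""
        x, y = point
        inside = False
        n = len(polygon)
        for i in range(n):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % n]
            if (y1 > y) != (y2 > y):
                x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x < x_cross:
                    inside = not inside
        return inside

    no_go = [(2.0, 2.0), (6.0, 2.0), (6.0, 6.0), (2.0, 6.0)]  # drawn on the map
    waypoints = [(1.0, 1.0), (4.0, 4.0), (7.0, 5.0)]
    violations = [wp for wp in waypoints if point_in_polygon(wp, no_go)]
    print(violations)   # [(4.0, 4.0)] -> reroute the plan around this waypoint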
  • As can be seen, icons representing available unmanned vehicle agents can be utilized in the map (upper left corner of display) to indicate the appropriate location of the represented unmanned vehicle agent on the map. In certain embodiments, updates and track history can be properly registered to each unmanned vehicle agent.
  • In certain embodiments of the present teachings, when an identified target has entered a building and been followed by a UGV, one or more UAVs can be directed by the system to orbit the building containing the target and determine if and when the target exits the building. Additional UGVs may be patrolling the perimeter of the building on the ground. If and when the target exits the building, an orbiting UAV that discovers the exit can inform other agents of the exit. The UGV that followed the target into the building can then exit the building, attempt to obtain line-of-sight to the target, and again follow the target. While this is occurring, other unmanned vehicle team members collaborate to maintain line-of-sight with the exited target. Alternatively, another UGV could obtain line-of-sight to the target and begin following the target, in which case the system may or may not instruct the original UGV to also find and follow the target, depending on mission parameters and/or operator decision making.
  • Other embodiments of the present teachings will be apparent to those skilled in the art from consideration of the specification and practice of the present teachings disclosed herein. For example, the present teachings could be used for long-term planning (e.g., the horizon for planning spans over minutes rather than seconds) in addition to short-term planning. It is intended that the specification and examples be considered as exemplary only.

Claims (20)

1. A method for controlling unmanned vehicles to maintain line-of-sight between a predetermined target and at least one of the unmanned vehicles, the method comprising:
providing at least one unmanned air vehicle including sensors configured to locate a target and at least one unmanned ground vehicle including sensors configured to locate and track a target;
communicating and exchanging data, using a controller, to and among the at least one unmanned air vehicle and the at least one unmanned ground vehicle;
controlling, using a controller, the at least one unmanned air vehicle and the at least one unmanned ground vehicle to maintain line-of-sight between the predetermined target and at least one of the unmanned air vehicles;
geolocating the predetermined target with the unmanned air vehicle using information regarding a position of the unmanned air vehicle and information regarding a position of the predetermined target relative to the unmanned air vehicle; and
transmitting information defining the geolocation of the predetermined target to the unmanned ground vehicle so that the unmanned ground vehicle can perform path planning based on the geolocation.
2. The method of claim 1, wherein the controller is an operator control unit.
3. The method of claim 2, wherein an operator identifies the predetermined target via the operator control unit.
4. The method of claim 1, wherein, when a first unmanned vehicle has line-of-sight to the predetermined target, another unmanned vehicle utilizes information regarding the position of the first unmanned vehicle and information regarding a position of the predetermined target relative to the first unmanned vehicle to plan a path to reach a position that has or will have line-of-sight to the predetermined target.
5. The method of claim 4, wherein the position that has or will have line-of-sight to the predetermined target takes into account a projected path of the predetermined target.
6. The method of claim 1, further comprising geolocating the predetermined target with the unmanned air vehicle and transmitting information regarding the position of the unmanned air vehicle and information regarding a position of the predetermined target relative to the unmanned air vehicle to the unmanned ground vehicle so that the unmanned ground vehicle can do path planning based on a geolocation of the predetermined target.
7. The method of claim 6, further comprising sending updated information regarding a position of the predetermined target to the unmanned ground vehicle at predetermined intervals.
8. The method of claim 1, wherein the unmanned air vehicle orbits a building containing the predetermined target and determines if the predetermined target exits the building.
9. The method of claim 8, further comprising the unmanned air vehicle sending information regarding predetermined target building entry and exit to one or more unmanned ground vehicles that can surround and/or enter the building to follow the predetermined target.
10. The method of claim 1, further comprising controlling the unmanned vehicles to obtain or maintain line-of-sight using waypoint navigation.
11. The method of claim 1, further comprising controlling the unmanned vehicles to obtain or maintain line-of-sight using path planning.
12. The method of claim 1, further comprising controlling the unmanned vehicles to obtain or maintain line-of-sight using an object avoidance behavior.
13. The method of claim 1, further comprising controlling the unmanned vehicles to obtain or maintain line-of-sight using.
14. The method of claim 1, further comprising allowing an operator override to control the unmanned vehicles.
15. The method of claim 14, further comprising allowing an operator to override waypoint navigation.
16. The method of claim 14, further comprising controlling the unmanned vehicles to assist the operator in searching for a target.
17. The method of claim 16, further comprising the operator designating an area in which the unmanned vehicles navigate to assist the operator in searching for a target.
18. The method of claim 16, further comprising the operator designating an area in which the unmanned vehicles do not navigate to assist the operator in searching for a target or maintain line-of-sight with a target.
19. A collaborative engagement system comprising:
at least one unmanned air vehicle including sensors configured to locate a target and at least one unmanned ground vehicle including sensors configured to locate and track a target; and
a controller facilitating control of, and communication and exchange of data to and among the unmanned vehicles,
wherein the collaborative engagement system controls the unmanned vehicles to maintain line-of-sight between a predetermined target and at least one of the unmanned vehicles, geolocating the predetermined target with the unmanned air vehicle and transmitting information defining the geolocation of the predetermined target to the unmanned ground vehicle so that the unmanned ground vehicle can perform path planning based on the geolocation.
20. The collaborative engagement system of claim 19, wherein an operator identifies the predetermined target via the controller.
US13/546,787 2008-03-16 2012-07-11 Collaborative Engagement for Target Identification and Tracking Abandoned US20120290152A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/546,787 US20120290152A1 (en) 2008-03-16 2012-07-11 Collaborative Engagement for Target Identification and Tracking

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US3698808P 2008-03-16 2008-03-16
US12/405,207 US8244469B2 (en) 2008-03-16 2009-03-16 Collaborative engagement for target identification and tracking
US13/546,787 US20120290152A1 (en) 2008-03-16 2012-07-11 Collaborative Engagement for Target Identification and Tracking

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US12/405,207 Continuation US8244469B2 (en) 2008-03-16 2009-03-16 Collaborative engagement for target identification and tracking

Publications (1)

Publication Number Publication Date
US20120290152A1 true US20120290152A1 (en) 2012-11-15

Family

ID=41531021

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/405,207 Active 2030-12-19 US8244469B2 (en) 2008-03-16 2009-03-16 Collaborative engagement for target identification and tracking
US13/546,787 Abandoned US20120290152A1 (en) 2008-03-16 2012-07-11 Collaborative Engagement for Target Identification and Tracking

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US12/405,207 Active 2030-12-19 US8244469B2 (en) 2008-03-16 2009-03-16 Collaborative engagement for target identification and tracking

Country Status (1)

Country Link
US (2) US8244469B2 (en)

Cited By (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110029234A1 (en) * 2009-07-29 2011-02-03 Lockheed Martin Corporation Threat Analysis Toolkit
US20130021475A1 (en) * 2011-07-21 2013-01-24 Canant Ross L Systems and methods for sensor control
US20140025235A1 (en) * 2012-07-17 2014-01-23 Elwha LLC, a limited liability company of the State of Delaware Unmanned device utilization methods and systems
US20140094997A1 (en) * 2012-09-28 2014-04-03 Elwha Llc Automated Systems, Devices, and Methods for Transporting and Supporting Patients Including Multi-Floor Operation
US20140094990A1 (en) * 2012-09-28 2014-04-03 Elwha Llc Automated Systems, Devices, and Methods for Transporting and Supporting Patients
US20140098990A1 (en) * 2012-10-09 2014-04-10 The Boeing Company Distributed Position Identification
US8935035B1 (en) * 2011-03-31 2015-01-13 The United States Of America As Represented By The Secretary Of The Army Advanced optimization framework for air-ground persistent surveillance using unmanned vehicles
US20150161484A1 (en) * 2007-05-25 2015-06-11 Definiens Ag Generating an Anatomical Model Using a Rule-Based Segmentation and Classification Process
US9254363B2 (en) 2012-07-17 2016-02-09 Elwha Llc Unmanned device interaction methods and systems
WO2016093908A3 (en) * 2014-09-05 2016-08-04 Precisionhawk Usa Inc. Automated un-manned air traffic control system
KR20160097873A (en) * 2015-02-10 2016-08-18 한화테크윈 주식회사 Monitoring and tracking system
US9454907B2 (en) * 2015-02-07 2016-09-27 Usman Hafeez System and method for placement of sensors through use of unmanned aerial vehicles
DE102015007156A1 (en) * 2015-06-03 2016-12-08 Audi Ag A method in which an unmanned aerial vehicle interacts with a motor vehicle and motor vehicle
CN107544551A (en) * 2017-09-01 2018-01-05 北方工业大学 Regional rapid logistics transportation method based on intelligent unmanned aerial vehicle
US9875403B2 (en) * 2012-12-12 2018-01-23 Thales Method for accurately geolocating an image sensor installed on board an aircraft
US20180082135A1 (en) * 2016-09-22 2018-03-22 Apple Inc. Vehicle Video System
US10188580B2 (en) * 2016-05-09 2019-01-29 Toyota Motor Engineering & Manufacturing North America, Inc. Systems and methods for providing environment information using an unmanned vehicle
CN109445444A (en) * 2018-12-25 2019-03-08 同济大学 A kind of barrier concentrates the robot path generation method under environment
WO2019089015A1 (en) * 2017-10-31 2019-05-09 Nissan North America, Inc. Autonomous vehicle operation with explicit occlusion reasoning
CN111078250A (en) * 2019-11-14 2020-04-28 新石器慧通(北京)科技有限公司 Equipment firmware upgrading method and system of movable carrier and unmanned vehicle
US10654476B2 (en) 2017-02-10 2020-05-19 Nissan North America, Inc. Autonomous vehicle operational management control
US10805899B2 (en) 2015-12-18 2020-10-13 At&T Intellectual Property I, L.P. Location assessment system for drones
US10836405B2 (en) 2017-10-30 2020-11-17 Nissan North America, Inc. Continual planning and metareasoning for controlling an autonomous vehicle
US11027751B2 (en) 2017-10-31 2021-06-08 Nissan North America, Inc. Reinforcement and model learning for vehicle operation
WO2021090312A3 (en) * 2019-11-06 2021-06-17 Israel Aerospace Industries Ltd. Line of sight maintenance during object tracking
US11084504B2 (en) 2017-11-30 2021-08-10 Nissan North America, Inc. Autonomous vehicle operational management scenarios
US11110941B2 (en) 2018-02-26 2021-09-07 Renault S.A.S. Centralized shared autonomous vehicle operational management
US11113973B2 (en) 2017-02-10 2021-09-07 Nissan North America, Inc. Autonomous vehicle operational management blocking monitoring
US11120688B2 (en) 2018-06-29 2021-09-14 Nissan North America, Inc. Orientation-adjust actions for autonomous vehicle operational management
US11215998B2 (en) * 2016-12-21 2022-01-04 Vorwerk & Co. Interholding Gmbh Method for the navigation and self-localization of an autonomously moving processing device
US11300957B2 (en) 2019-12-26 2022-04-12 Nissan North America, Inc. Multiple objective explanation and control interface design
US11334069B1 (en) * 2013-04-22 2022-05-17 National Technology & Engineering Solutions Of Sandia, Llc Systems, methods and computer program products for collaborative agent control
EP3821313A4 (en) * 2018-07-12 2022-07-27 Terraclear Inc. Object identification and collection system and method
US11500380B2 (en) 2017-02-10 2022-11-15 Nissan North America, Inc. Autonomous vehicle operational management including operating a partially observable Markov decision process model instance
US11577746B2 (en) 2020-01-31 2023-02-14 Nissan North America, Inc. Explainability of autonomous vehicle decision making
US11613269B2 (en) 2019-12-23 2023-03-28 Nissan North America, Inc. Learning safety and human-centered constraints in autonomous vehicles
US11635758B2 (en) 2019-11-26 2023-04-25 Nissan North America, Inc. Risk aware executor with action set recommendations
US11714971B2 (en) 2020-01-31 2023-08-01 Nissan North America, Inc. Explainability of autonomous vehicle decision making
US11782438B2 (en) 2020-03-17 2023-10-10 Nissan North America, Inc. Apparatus and method for post-processing a decision-making model of an autonomous vehicle using multivariate data
US11789003B1 (en) 2018-10-31 2023-10-17 United Services Automobile Association (Usaa) Water contamination detection system
WO2023204821A1 (en) * 2022-04-22 2023-10-26 Electric Sheep Robotics, Inc. Navigating an unmanned ground vehicle
US11854262B1 (en) * 2018-10-31 2023-12-26 United Services Automobile Association (Usaa) Post-disaster conditions monitoring system using drones
US11874120B2 (en) 2017-12-22 2024-01-16 Nissan North America, Inc. Shared autonomous vehicle operational management
US11900786B1 (en) 2018-10-31 2024-02-13 United Services Automobile Association (Usaa) Electrical power outage detection system
US11899454B2 (en) 2019-11-26 2024-02-13 Nissan North America, Inc. Objective-based reasoning in autonomous vehicle decision-making
US11958183B2 (en) 2020-09-18 2024-04-16 The Research Foundation For The State University Of New York Negotiation-based human-robot collaboration via augmented reality

Families Citing this family (203)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040162637A1 (en) 2002-07-25 2004-08-19 Yulun Wang Medical tele-robotic system with a master remote station with an arbitrator
US7813836B2 (en) 2003-12-09 2010-10-12 Intouch Technologies, Inc. Protocol for a remotely controlled videoconferencing robot
US8077963B2 (en) 2004-07-13 2011-12-13 Yulun Wang Mobile robot with a head-based movement mapping scheme
US9198728B2 (en) 2005-09-30 2015-12-01 Intouch Technologies, Inc. Multi-camera mobile teleconferencing platform
US8849679B2 (en) 2006-06-15 2014-09-30 Intouch Technologies, Inc. Remote controlled robot system that provides medical images
US8220710B2 (en) * 2006-06-19 2012-07-17 Kiva Systems, Inc. System and method for positioning a mobile drive unit
US7912574B2 (en) 2006-06-19 2011-03-22 Kiva Systems, Inc. System and method for transporting inventory items
US20130302132A1 (en) 2012-05-14 2013-11-14 Kiva Systems, Inc. System and Method for Maneuvering a Mobile Drive Unit
US9160783B2 (en) 2007-05-09 2015-10-13 Intouch Technologies, Inc. Robot system that operates through a network firewall
US10875182B2 (en) 2008-03-20 2020-12-29 Teladoc Health, Inc. Remote presence system mounted to operating room hardware
US8179418B2 (en) 2008-04-14 2012-05-15 Intouch Technologies, Inc. Robotic based health care system
US8170241B2 (en) 2008-04-17 2012-05-01 Intouch Technologies, Inc. Mobile tele-presence system with a microphone system
JP5215740B2 (en) * 2008-06-09 2013-06-19 株式会社日立製作所 Mobile robot system
US9193065B2 (en) 2008-07-10 2015-11-24 Intouch Technologies, Inc. Docking system for a tele-presence robot
US9842192B2 (en) 2008-07-11 2017-12-12 Intouch Technologies, Inc. Tele-presence robot system with multi-cast features
US8340819B2 (en) 2008-09-18 2012-12-25 Intouch Technologies, Inc. Mobile videoconferencing robot system with network adaptive driving
US8996165B2 (en) 2008-10-21 2015-03-31 Intouch Technologies, Inc. Telepresence robot with a camera boom
US9138891B2 (en) 2008-11-25 2015-09-22 Intouch Technologies, Inc. Server connectivity control for tele-presence robot
US8463435B2 (en) 2008-11-25 2013-06-11 Intouch Technologies, Inc. Server connectivity control for tele-presence robot
US8849680B2 (en) 2009-01-29 2014-09-30 Intouch Technologies, Inc. Documentation through a remote presence robot
US20100228406A1 (en) * 2009-03-03 2010-09-09 Honeywell International Inc. UAV Flight Control Method And System
WO2010118470A1 (en) * 2009-04-17 2010-10-21 The University Of Sydney Drill hole planning
US8897920B2 (en) 2009-04-17 2014-11-25 Intouch Technologies, Inc. Tele-presence robot system with software modularity, projector and laser pointer
US8350749B1 (en) * 2009-04-29 2013-01-08 The United States Of America As Represented By The Secretary Of The Air Force Radar signature database validation for automatic target recognition
US8872693B1 (en) 2009-04-29 2014-10-28 The United States of America as respresented by the Secretary of the Air Force Radar signature database validation for automatic target recognition
US8634982B2 (en) * 2009-08-19 2014-01-21 Raytheon Company System and method for resource allocation and management
US8473101B2 (en) * 2009-08-21 2013-06-25 Harris Corporation Coordinated action robotic system and related methods
US8384755B2 (en) 2009-08-26 2013-02-26 Intouch Technologies, Inc. Portable remote presence robot
US11399153B2 (en) 2009-08-26 2022-07-26 Teladoc Health, Inc. Portable telepresence apparatus
KR101239866B1 (en) * 2009-09-01 2013-03-06 한국전자통신연구원 Method and apparatus for birds control using mobile robot
EP2493664B1 (en) * 2009-10-27 2019-02-20 Battelle Memorial Institute Semi-autonomous multi-use robot system and method of operation
US9163909B2 (en) * 2009-12-11 2015-10-20 The Boeing Company Unmanned multi-purpose ground vehicle with different levels of control
US20110153079A1 (en) * 2009-12-18 2011-06-23 Electronics And Telecommunication Research Institute Apparatus and method for distributing and monitoring robot application and robot driven thereby
US11154981B2 (en) 2010-02-04 2021-10-26 Teladoc Health, Inc. Robot user interface for telepresence robot system
US8525834B2 (en) * 2010-02-17 2013-09-03 Lockheed Martin Corporation Voxel based three dimensional virtual environments
US8670017B2 (en) 2010-03-04 2014-03-11 Intouch Technologies, Inc. Remote presence system including a cart that supports a robot face and an overhead camera
US8786845B2 (en) 2010-04-08 2014-07-22 Navteq B.V. System and method of generating and using open sky data
US10343283B2 (en) 2010-05-24 2019-07-09 Intouch Technologies, Inc. Telepresence robot system that can be accessed by a cellular phone
US10808882B2 (en) 2010-05-26 2020-10-20 Intouch Technologies, Inc. Tele-robotic system with a robot face placed on a chair
US9681065B2 (en) * 2010-06-15 2017-06-13 Flir Systems, Inc. Gimbal positioning with target velocity compensation
WO2012002976A1 (en) * 2010-07-01 2012-01-05 Mearthane Products Corporation High performance resilient skate wheel with compression modulus gradient
US8781629B2 (en) * 2010-09-22 2014-07-15 Toyota Motor Engineering & Manufacturing North America, Inc. Human-robot interface apparatuses and methods of controlling robots
EP2444871A1 (en) * 2010-10-19 2012-04-25 BAE Systems Plc Sensor positioning for target tracking
WO2012052738A1 (en) * 2010-10-19 2012-04-26 Bae Systems Plc Sensor positioning for target tracking
US9264664B2 (en) 2010-12-03 2016-02-16 Intouch Technologies, Inc. Systems and methods for dynamic bandwidth allocation
US8442790B2 (en) * 2010-12-03 2013-05-14 Qbotix, Inc. Robotic heliostat calibration system and method
GB2487529A (en) * 2011-01-19 2012-08-01 Automotive Robotic Industry Ltd Security system for controlling a plurality of unmanned ground vehicles
US8918230B2 (en) * 2011-01-21 2014-12-23 Mitre Corporation Teleoperation of unmanned ground vehicle
US9323250B2 (en) 2011-01-28 2016-04-26 Intouch Technologies, Inc. Time-dependent navigation of telepresence robots
EP2668008A4 (en) 2011-01-28 2018-01-24 Intouch Technologies, Inc. Interfacing with a mobile telepresence robot
US8396730B2 (en) * 2011-02-14 2013-03-12 Raytheon Company System and method for resource allocation and management
US8660338B2 (en) 2011-03-22 2014-02-25 Honeywell International Inc. Wide baseline feature matching using collobrative navigation and digital terrain elevation data constraints
US8868323B2 (en) 2011-03-22 2014-10-21 Honeywell International Inc. Collaborative navigation using conditional updates
WO2012142587A1 (en) * 2011-04-15 2012-10-18 Irobot Corporation Method for path generation for an end effector of a robot
US10769739B2 (en) 2011-04-25 2020-09-08 Intouch Technologies, Inc. Systems and methods for management of information among medical providers and facilities
US20140139616A1 (en) 2012-01-27 2014-05-22 Intouch Technologies, Inc. Enhanced Diagnostics for a Telepresence Robot
US9098611B2 (en) 2012-11-26 2015-08-04 Intouch Technologies, Inc. Enhanced video interaction for a user interface of a telepresence network
TW201249713A (en) * 2011-06-02 2012-12-16 Hon Hai Prec Ind Co Ltd Unmanned aerial vehicle control system and method
CN102809969A (en) * 2011-06-03 2012-12-05 鸿富锦精密工业(深圳)有限公司 Unmanned aerial vehicle control system and method
US20120316680A1 (en) * 2011-06-13 2012-12-13 Microsoft Corporation Tracking and following of moving objects by a mobile robot
US8904517B2 (en) 2011-06-28 2014-12-02 International Business Machines Corporation System and method for contexually interpreting image sequences
JP5892360B2 (en) 2011-08-02 2016-03-23 ソニー株式会社 Robot instruction apparatus, robot instruction method, program, and communication system
US8836751B2 (en) 2011-11-08 2014-09-16 Intouch Technologies, Inc. Tele-presence system with a user interface that displays different communication links
CN102506867B (en) * 2011-11-21 2014-07-30 清华大学 SINS (strap-down inertia navigation system)/SMANS (scene matching auxiliary navigation system) combined navigation method based on Harris comer matching and combined navigation system
CN102506868B (en) * 2011-11-21 2014-03-12 清华大学 SINS (strap-down inertia navigation system)/SMANS (scene matching auxiliary navigation system)/TRNS (terrain reference navigation system) combined navigation method based on federated filtering and system
TW201328344A (en) * 2011-12-27 2013-07-01 Hon Hai Prec Ind Co Ltd System and method for controlling a unmanned aerial vehicle to capture images of a target location
CN103188431A (en) * 2011-12-27 2013-07-03 鸿富锦精密工业(深圳)有限公司 System and method for controlling unmanned aerial vehicle to conduct image acquisition
US8874266B1 (en) * 2012-01-19 2014-10-28 Google Inc. Enhancing sensor data by coordinating and/or correlating data attributes
US8620464B1 (en) * 2012-02-07 2013-12-31 The United States Of America As Represented By The Secretary Of The Navy Visual automated scoring system
US8788121B2 (en) 2012-03-09 2014-07-22 Proxy Technologies, Inc. Autonomous vehicle and method for coordinating the paths of multiple autonomous vehicles
US8874360B2 (en) 2012-03-09 2014-10-28 Proxy Technologies Inc. Autonomous vehicle and method for coordinating the paths of multiple autonomous vehicles
KR101970962B1 (en) * 2012-03-19 2019-04-22 삼성전자주식회사 Method and apparatus for baby monitering
JP5724919B2 (en) * 2012-03-22 2015-05-27 トヨタ自動車株式会社 Orbit generation device, moving body, orbit generation method and program
US9251313B2 (en) 2012-04-11 2016-02-02 Intouch Technologies, Inc. Systems and methods for visualizing and managing telepresence devices in healthcare networks
US8902278B2 (en) 2012-04-11 2014-12-02 Intouch Technologies, Inc. Systems and methods for visualizing and managing telepresence devices in healthcare networks
US9235212B2 (en) * 2012-05-01 2016-01-12 5D Robotics, Inc. Conflict resolution based on object behavioral determination and collaborative relative positioning
US9361021B2 (en) 2012-05-22 2016-06-07 Irobot Corporation Graphical user interfaces including touchpad driving interfaces for telemedicine devices
WO2013176758A1 (en) 2012-05-22 2013-11-28 Intouch Technologies, Inc. Clinical workflows utilizing autonomous and semi-autonomous telemedicine devices
US8682504B2 (en) * 2012-06-04 2014-03-25 Rockwell Collins, Inc. System and method for developing dynamic positional database for air vehicles and terrain features
ES2394540B1 (en) * 2012-07-26 2013-12-11 Geonumerics, S.L. PROCEDURE FOR THE ACQUISITION AND PROCESSING OF GEOGRAPHICAL INFORMATION OF A TRAJECT
KR102142162B1 (en) 2012-08-27 2020-09-14 에이비 엘렉트로룩스 Robot positioning system
US8442765B1 (en) 2012-11-08 2013-05-14 Honeywell International Inc. Shared state selection and data exchange for collaborative navigation using conditionally independent parallel filters
US9104752B2 (en) 2012-11-16 2015-08-11 Honeywell International Inc. Data sharing among conditionally independent parallel filters
US9417351B2 (en) * 2012-12-21 2016-08-16 Cgg Services Sa Marine seismic surveys using clusters of autonomous underwater vehicles
US9044863B2 (en) * 2013-02-06 2015-06-02 Steelcase Inc. Polarized enhanced confidentiality in mobile camera applications
WO2014169944A1 (en) 2013-04-15 2014-10-23 Aktiebolaget Electrolux Robotic vacuum cleaner with protruding sidebrush
EP2986192B1 (en) 2013-04-15 2021-03-31 Aktiebolaget Electrolux Robotic vacuum cleaner
US9070289B2 (en) * 2013-05-10 2015-06-30 Palo Alto Research Incorporated System and method for detecting, tracking and estimating the speed of vehicles from a mobile platform
US9025825B2 (en) 2013-05-10 2015-05-05 Palo Alto Research Center Incorporated System and method for visual motion based object segmentation and tracking
US9696430B2 (en) * 2013-08-27 2017-07-04 Massachusetts Institute Of Technology Method and apparatus for locating a target using an autonomous unmanned aerial vehicle
US10112710B2 (en) 2013-10-15 2018-10-30 Elwha Llc Motor vehicle with captive aircraft
KR102355046B1 (en) 2013-10-31 2022-01-25 에어로바이론먼트, 인크. Interactive weapon targeting system displaying remote sensed image of target area
US20150134384A1 (en) * 2013-11-08 2015-05-14 Sharper Shape Ltd. System and method for allocating resources
EP2879012A1 (en) * 2013-11-29 2015-06-03 The Boeing Company System and method for commanding a payload of an aircraft
WO2015082595A1 (en) * 2013-12-06 2015-06-11 Bae Systems Plc Imaging method and apparatus
EP2881827A1 (en) * 2013-12-06 2015-06-10 BAE Systems PLC Imaging method and apparatus
WO2015082597A1 (en) 2013-12-06 2015-06-11 Bae Systems Plc Payload delivery
US10203691B2 (en) 2013-12-06 2019-02-12 Bae Systems Plc Imaging method and apparatus
ES2656664T3 (en) 2013-12-19 2018-02-28 Aktiebolaget Electrolux Robotic cleaning device with perimeter registration function
CN105813528B (en) 2013-12-19 2019-05-07 伊莱克斯公司 The barrier sensing of robotic cleaning device is creeped
EP3084540B1 (en) 2013-12-19 2021-04-14 Aktiebolaget Electrolux Robotic cleaning device and operating method
JP6638988B2 (en) 2013-12-19 2020-02-05 アクチエボラゲット エレクトロルックス Robot vacuum cleaner with side brush and moving in spiral pattern
US10433697B2 (en) 2013-12-19 2019-10-08 Aktiebolaget Electrolux Adaptive speed control of rotating side brush
WO2015090399A1 (en) 2013-12-19 2015-06-25 Aktiebolaget Electrolux Robotic cleaning device and method for landmark recognition
EP3084539B1 (en) 2013-12-19 2019-02-20 Aktiebolaget Electrolux Prioritizing cleaning areas
US10231591B2 (en) 2013-12-20 2019-03-19 Aktiebolaget Electrolux Dust container
JP6340824B2 (en) * 2014-02-25 2018-06-13 村田機械株式会社 Autonomous vehicle
ES2681802T3 (en) 2014-07-10 2018-09-17 Aktiebolaget Electrolux Method to detect a measurement error in a robotic cleaning device
EP3190938A1 (en) 2014-09-08 2017-07-19 Aktiebolaget Electrolux Robotic vacuum cleaner
CN106659345B (en) 2014-09-08 2019-09-03 伊莱克斯公司 Robotic vacuum cleaner
US9057508B1 (en) 2014-10-22 2015-06-16 Codeshelf Modular hanging lasers to enable real-time control in a distribution center
US10334158B2 (en) * 2014-11-03 2019-06-25 Robert John Gove Autonomous media capturing
US10375359B1 (en) * 2014-11-12 2019-08-06 Trace Live Network Inc. Visually intelligent camera device with peripheral control outputs
US9752878B2 (en) * 2014-12-09 2017-09-05 Sikorsky Aircraft Corporation Unmanned aerial vehicle control handover planning
KR101664614B1 (en) * 2014-12-09 2016-10-24 한국항공우주연구원 Manless flight sensing apparatus and sniper weapon for realizing to situation of enemy position, image collection method
EP3230814B1 (en) 2014-12-10 2021-02-17 Aktiebolaget Electrolux Using laser sensor for floor type detection
CN107072454A (en) 2014-12-12 2017-08-18 伊莱克斯公司 Side brush and robot cleaner
JP6532530B2 (en) 2014-12-16 2019-06-19 アクチエボラゲット エレクトロルックス How to clean a robot vacuum cleaner
KR102339531B1 (en) 2014-12-16 2021-12-16 에이비 엘렉트로룩스 Experience-based roadmap for a robotic cleaning device
EP3043202B1 (en) * 2015-01-09 2019-07-24 Ricoh Company, Ltd. Moving body system
US20160267669A1 (en) * 2015-03-12 2016-09-15 James W. Justice 3D Active Warning and Recognition Environment (3D AWARE): A low Size, Weight, and Power (SWaP) LIDAR with Integrated Image Exploitation Processing for Diverse Applications
DK3274660T3 (en) 2015-03-25 2021-08-23 Aerovironment Inc MACHINE-TO-MACHINE TARGET SEARCH MAINTENANCE OF POSITIVE IDENTIFICATION
US9327397B1 (en) 2015-04-09 2016-05-03 Codeshelf Telepresence based inventory pick and place operations through robotic arms affixed to each row of a shelf
US11099554B2 (en) 2015-04-17 2021-08-24 Aktiebolaget Electrolux Robotic cleaning device and a method of controlling the robotic cleaning device
US9262741B1 (en) 2015-04-28 2016-02-16 Codeshelf Continuous barcode tape based inventory location tracking
KR101662032B1 (en) * 2015-04-28 2016-10-10 주식회사 유브이코어 UAV Aerial Display System for Synchronized with Operators Gaze Direction
KR102527245B1 (en) * 2015-05-20 2023-05-02 주식회사 윌러스표준기술연구소 A drone and a method for controlling thereof
US9745060B2 (en) * 2015-07-17 2017-08-29 Topcon Positioning Systems, Inc. Agricultural crop analysis drone
EP3121675B1 (en) * 2015-07-23 2019-10-02 The Boeing Company Method for positioning aircrafts based on analyzing images of mobile targets
US10281281B2 (en) * 2015-09-02 2019-05-07 The United States Of America As Represented By The Secretary Of The Navy Decision support and control systems including various graphical user interfaces configured for displaying multiple transit options for a platform with respect to hazard and objects and related methods
KR102445064B1 (en) 2015-09-03 2022-09-19 에이비 엘렉트로룩스 system of robot cleaning device
CN107209854A (en) 2015-09-15 2017-09-26 深圳市大疆创新科技有限公司 For the support system and method that smoothly target is followed
WO2017071143A1 (en) 2015-10-30 2017-05-04 SZ DJI Technology Co., Ltd. Systems and methods for uav path planning and control
US10231441B2 (en) 2015-09-24 2019-03-19 Digi-Star, Llc Agricultural drone for use in livestock feeding
US10321663B2 (en) 2015-09-24 2019-06-18 Digi-Star, Llc Agricultural drone for use in livestock monitoring
US10008123B2 (en) * 2015-10-20 2018-06-26 Skycatch, Inc. Generating a mission plan for capturing aerial images with an unmanned aerial vehicle
US9508263B1 (en) * 2015-10-20 2016-11-29 Skycatch, Inc. Generating a mission plan for capturing aerial images with an unmanned aerial vehicle
US10334050B2 (en) 2015-11-04 2019-06-25 Zoox, Inc. Software application and logic to modify configuration of an autonomous vehicle
US11283877B2 (en) 2015-11-04 2022-03-22 Zoox, Inc. Software application and logic to modify configuration of an autonomous vehicle
WO2017079341A2 (en) 2015-11-04 2017-05-11 Zoox, Inc. Automated extraction of semantic information to enhance incremental mapping modifications for robotic vehicles
US9606539B1 (en) 2015-11-04 2017-03-28 Zoox, Inc. Autonomous vehicle fleet service and system
US9632502B1 (en) 2015-11-04 2017-04-25 Zoox, Inc. Machine-learning systems and techniques to optimize teleoperation and/or planner decisions
US10401852B2 (en) 2015-11-04 2019-09-03 Zoox, Inc. Teleoperation system and method for trajectory modification of autonomous vehicles
US9754490B2 (en) 2015-11-04 2017-09-05 Zoox, Inc. Software application to request and control an autonomous vehicle service
US10248119B2 (en) 2015-11-04 2019-04-02 Zoox, Inc. Interactive autonomous vehicle command controller
US9612123B1 (en) 2015-11-04 2017-04-04 Zoox, Inc. Adaptive mapping to navigate autonomous vehicles responsive to physical environment changes
US9537914B1 (en) * 2015-12-01 2017-01-03 International Business Machines Corporation Vehicle domain multi-level parallel buffering and context-based streaming data pre-processing system
US10762795B2 (en) * 2016-02-08 2020-09-01 Skydio, Inc. Unmanned aerial vehicle privacy controls
EP3430424B1 (en) 2016-03-15 2021-07-21 Aktiebolaget Electrolux Robotic cleaning device and a method at the robotic cleaning device of performing cliff detection
US10152891B2 (en) * 2016-05-02 2018-12-11 Cnh Industrial America Llc System for avoiding collisions between autonomous vehicles conducting agricultural operations
WO2017192666A1 (en) * 2016-05-03 2017-11-09 Sunshine Aerial Systems, Inc. Autonomous aerial vehicle
WO2017194102A1 (en) 2016-05-11 2017-11-16 Aktiebolaget Electrolux Robotic cleaning device
US11468778B2 (en) 2016-06-10 2022-10-11 Metal Raptor, Llc Emergency shutdown and landing for passenger drones and unmanned aerial vehicles with air traffic control
US11436929B2 (en) 2016-06-10 2022-09-06 Metal Raptor, Llc Passenger drone switchover between wireless networks
US11488483B2 (en) 2016-06-10 2022-11-01 Metal Raptor, Llc Passenger drone collision avoidance via air traffic control over wireless network
US10789853B2 (en) * 2016-06-10 2020-09-29 ETAK Systems, LLC Drone collision avoidance via air traffic control over wireless networks
US11670179B2 (en) 2016-06-10 2023-06-06 Metal Raptor, Llc Managing detected obstructions in air traffic control systems for passenger drones
US11328613B2 (en) 2016-06-10 2022-05-10 Metal Raptor, Llc Waypoint directory in air traffic control systems for passenger drones and unmanned aerial vehicles
US11403956B2 (en) 2016-06-10 2022-08-02 Metal Raptor, Llc Air traffic control monitoring systems and methods for passenger drones
US9959772B2 (en) * 2016-06-10 2018-05-01 ETAK Systems, LLC Flying lane management systems and methods for unmanned aerial vehicles
US11670180B2 (en) 2016-06-10 2023-06-06 Metal Raptor, Llc Obstruction detection in air traffic control systems for passenger drones
US11710414B2 (en) 2016-06-10 2023-07-25 Metal Raptor, Llc Flying lane management systems and methods for passenger drones
US11341858B2 (en) 2016-06-10 2022-05-24 Metal Raptor, Llc Managing dynamic obstructions in air traffic control systems for passenger drones and unmanned aerial vehicles
CN115113645A (en) 2016-07-04 2022-09-27 深圳市大疆创新科技有限公司 Method for supporting aeronautical work
CN106303431A (en) * 2016-08-23 2017-01-04 江苏金晓电子信息股份有限公司 A kind of multi-machine interaction control method improving urban road monitor efficiency
CN107783555B (en) * 2016-08-29 2021-05-14 杭州海康机器人技术有限公司 Target positioning method, device and system based on unmanned aerial vehicle
MX2019002714A (en) * 2016-09-09 2019-10-02 Walmart Apollo Llc Geographic area monitoring systems and methods utilizing computational sharing across multiple unmanned vehicles.
GB2568006B (en) 2016-09-09 2019-09-18 Walmart Apollo Llc Systems and methods to interchangeably couple tool systems with unmanned vehicles
MX2019002716A (en) 2016-09-09 2019-09-23 Walmart Apollo Llc Geographic area monitoring systems and methods that balance power usage between multiple unmanned vehicles.
CA3035771A1 (en) 2016-09-09 2018-03-15 Walmart Apollo, Llc Geographic area monitoring systems and methods through interchanging tool systems between unmanned vehicles
US10955838B2 (en) * 2016-09-26 2021-03-23 Dji Technology, Inc. System and method for movable object control
IT201600101337A1 (en) * 2016-11-03 2018-05-03 Srsd Srl MOBILE TERRESTRIAL OR NAVAL SYSTEM, WITH REMOTE CONTROL AND CONTROL, WITH PASSIVE AND ACTIVE DEFENSES, EQUIPPED WITH SENSORS AND COMPLETE ACTUATORS CONTEMPORARY COVERAGE OF THE SURROUNDING SCENARIO
WO2018138584A1 (en) * 2017-01-26 2018-08-02 Mobileye Vision Technologies Ltd. Vehicle navigation based on aligned image and lidar information
CN108401449A (en) * 2017-03-31 2018-08-14 深圳市大疆创新科技有限公司 Flight simulation method, device based on Fusion and equipment
CN106843235B (en) * 2017-03-31 2019-04-12 深圳市靖洲科技有限公司 A kind of Artificial Potential Field path planning towards no person bicycle
US11862302B2 (en) 2017-04-24 2024-01-02 Teladoc Health, Inc. Automated transcription and documentation of tele-health encounters
CN114815863A (en) * 2017-04-27 2022-07-29 深圳市大疆创新科技有限公司 Control method and device of unmanned aerial vehicle and method and device for prompting obstacle
CN110621208A (en) 2017-06-02 2019-12-27 伊莱克斯公司 Method for detecting a height difference of a surface in front of a robotic cleaning device
WO2018218640A1 (en) * 2017-06-02 2018-12-06 SZ DJI Technology Co., Ltd. Systems and methods for multi-target tracking and autofocusing based on deep machine learning and laser radar
US10483007B2 (en) 2017-07-25 2019-11-19 Intouch Technologies, Inc. Modular telehealth cart with thermal imaging and touch screen user interface
CN107608372B (en) * 2017-08-14 2020-12-04 广西师范大学 Multi-unmanned aerial vehicle collaborative track planning method based on combination of improved RRT algorithm and improved PH curve
US11636944B2 (en) 2017-08-25 2023-04-25 Teladoc Health, Inc. Connectivity infrastructure for a telehealth platform
US11506498B2 (en) * 2017-08-31 2022-11-22 Saab Ab Method and a system for estimating the geographic position of a target
US10599138B2 (en) * 2017-09-08 2020-03-24 Aurora Flight Sciences Corporation Autonomous package delivery system
EP3687357A1 (en) 2017-09-26 2020-08-05 Aktiebolaget Electrolux Controlling movement of a robotic cleaning device
JP7067023B2 (en) * 2017-11-10 2022-05-16 富士通株式会社 Information processing device, background update method and background update program
US20210018319A1 (en) * 2018-03-26 2021-01-21 Nec Corporation Search support apparatus, search support method, and computer-readable recording medium
US10617299B2 (en) 2018-04-27 2020-04-14 Intouch Technologies, Inc. Telehealth cart that supports a removable tablet with seamless audio/video switching
US11292449B2 (en) * 2018-10-19 2022-04-05 GEOSAT Aerospace & Technology Unmanned ground vehicle and method for operating unmanned ground vehicle
CN114519917A (en) * 2018-10-29 2022-05-20 赫克斯冈技术中心 Mobile monitoring system
US11131992B2 (en) 2018-11-30 2021-09-28 Denso International America, Inc. Multi-level collaborative control system with dual neural network planning for autonomous vehicle control in a noisy environment
EP3690383A1 (en) 2019-02-04 2020-08-05 CMI Defence S.A. Operational section of armoured vehicles communicating with a flotilla of drones
CN109828596A (en) * 2019-02-28 2019-05-31 深圳市道通智能航空技术有限公司 A kind of method for tracking target, device and unmanned plane
CN110209126A (en) * 2019-03-21 2019-09-06 南京航空航天大学 The wheeled unmanned vehicle of modularization and rotor wing unmanned aerial vehicle fleet system
CN110398985B (en) * 2019-08-14 2022-11-11 北京信成未来科技有限公司 Distributed self-adaptive unmanned aerial vehicle measurement and control system and method
CN110531782A (en) * 2019-08-23 2019-12-03 西南交通大学 Unmanned aerial vehicle flight path paths planning method for community distribution
WO2021049227A1 (en) * 2019-09-13 2021-03-18 ソニー株式会社 Information processing system, information processing device, and information processing program
US11762094B2 (en) * 2020-03-05 2023-09-19 Uatc, Llc Systems and methods for object detection and motion prediction by fusing multiple sensor sweeps into a range view representation
KR20210127558A (en) * 2020-04-14 2021-10-22 한국전자통신연구원 Multi-agent based personal and robot collaboration system and method
CN112130581B (en) * 2020-08-19 2022-06-17 昆明理工大学 Unmanned aerial vehicle cluster cooperative task planning method for aerial maneuver battle
CN112987799B (en) * 2021-04-16 2022-04-05 电子科技大学 Unmanned aerial vehicle path planning method based on improved RRT algorithm
CN114449455A (en) * 2021-12-16 2022-05-06 珠海云洲智能科技股份有限公司 Integrated control system of wide area cluster task and wide area cluster system
CN114623816B (en) * 2022-02-16 2023-11-07 中国电子科技集团公司第十研究所 Method and device for tracking and maintaining airborne fusion information guided sensor

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5198607A (en) * 1992-02-18 1993-03-30 Trw Inc. Laser anti-missle defense system
US5643135A (en) * 1995-02-20 1997-07-01 Toyota Jidosha Kabushiki Kaisha Apparatus for controlling concurrent releasing and engaging actions of frictional coupling devices for shifting vehicle automatic transmission
US20040030571A1 (en) * 2002-04-22 2004-02-12 Neal Solomon System, method and apparatus for automated collective mobile robotic vehicles used in remote sensing surveillance
US20040030451A1 (en) * 2002-04-22 2004-02-12 Neal Solomon Methods and apparatus for decision making of system of mobile robotic vehicles
US20040167682A1 (en) * 2003-02-21 2004-08-26 Lockheed Martin Corporation Virtual sensor mast
US20050008155A1 (en) * 2003-07-08 2005-01-13 Pacific Microwave Research, Inc. Secure digital transmitter and method of operation
US20050122914A1 (en) * 2003-07-08 2005-06-09 Pacific Microwave Research, Inc. Secure Digital Communication System for High Multi-Path Environments
US6910657B2 (en) * 2003-05-30 2005-06-28 Raytheon Company System and method for locating a target and guiding a vehicle toward the target
US20050195096A1 (en) * 2004-03-05 2005-09-08 Ward Derek K. Rapid mobility analysis and vehicular route planning from overhead imagery
US20060085106A1 (en) * 2004-02-06 2006-04-20 Icosystem Corporation Methods and systems for area search using a plurality of unmanned vehicles
US20070168117A1 (en) * 2006-01-19 2007-07-19 Raytheon Company System and method for distributed engagement
US20080158256A1 (en) * 2006-06-26 2008-07-03 Lockheed Martin Corporation Method and system for providing a perspective view image by intelligent fusion of a plurality of sensor data
US7480395B2 (en) * 2002-06-06 2009-01-20 Techteam Government Solutions, Inc. Decentralized detection, localization, and tracking utilizing distributed sensors
US20090219393A1 (en) * 2008-02-29 2009-09-03 The Boeing Company Traffic and security monitoring system and method
US20090234499A1 (en) * 2008-03-13 2009-09-17 Battelle Energy Alliance, Llc System and method for seamless task-directed autonomy for robots
US20130103195A1 (en) * 2006-12-28 2013-04-25 Science Applications International Corporation Methods and Systems for An Autonomous Robotic Platform

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5198607A (en) * 1992-02-18 1993-03-30 Trw Inc. Laser anti-missle defense system
US5643135A (en) * 1995-02-20 1997-07-01 Toyota Jidosha Kabushiki Kaisha Apparatus for controlling concurrent releasing and engaging actions of frictional coupling devices for shifting vehicle automatic transmission
US20040030571A1 (en) * 2002-04-22 2004-02-12 Neal Solomon System, method and apparatus for automated collective mobile robotic vehicles used in remote sensing surveillance
US20040030451A1 (en) * 2002-04-22 2004-02-12 Neal Solomon Methods and apparatus for decision making of system of mobile robotic vehicles
US20040030450A1 (en) * 2002-04-22 2004-02-12 Neal Solomon System, methods and apparatus for implementing mobile robotic communication interface
US7480395B2 (en) * 2002-06-06 2009-01-20 Techteam Government Solutions, Inc. Decentralized detection, localization, and tracking utilizing distributed sensors
US20040167682A1 (en) * 2003-02-21 2004-08-26 Lockheed Martin Corporation Virtual sensor mast
US7149611B2 (en) * 2003-02-21 2006-12-12 Lockheed Martin Corporation Virtual sensor mast
US6910657B2 (en) * 2003-05-30 2005-06-28 Raytheon Company System and method for locating a target and guiding a vehicle toward the target
US20050122914A1 (en) * 2003-07-08 2005-06-09 Pacific Microwave Research, Inc. Secure Digital Communication System for High Multi-Path Environments
US20050008155A1 (en) * 2003-07-08 2005-01-13 Pacific Microwave Research, Inc. Secure digital transmitter and method of operation
US20060085106A1 (en) * 2004-02-06 2006-04-20 Icosystem Corporation Methods and systems for area search using a plurality of unmanned vehicles
US20050195096A1 (en) * 2004-03-05 2005-09-08 Ward Derek K. Rapid mobility analysis and vehicular route planning from overhead imagery
US20070168117A1 (en) * 2006-01-19 2007-07-19 Raytheon Company System and method for distributed engagement
US20080158256A1 (en) * 2006-06-26 2008-07-03 Lockheed Martin Corporation Method and system for providing a perspective view image by intelligent fusion of a plurality of sensor data
US20130103195A1 (en) * 2006-12-28 2013-04-25 Science Applications International Corporation Methods and Systems for An Autonomous Robotic Platform
US20090219393A1 (en) * 2008-02-29 2009-09-03 The Boeing Company Traffic and security monitoring system and method
US20090234499A1 (en) * 2008-03-13 2009-09-17 Battelle Energy Alliance, Llc System and method for seamless task-directed autonomy for robots

Cited By (74)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150161484A1 (en) * 2007-05-25 2015-06-11 Definiens Ag Generating an Anatomical Model Using a Rule-Based Segmentation and Classification Process
US9092852B2 (en) * 2007-05-25 2015-07-28 Definiens Ag Generating an anatomical model using a rule-based segmentation and classification process
US20110029234A1 (en) * 2009-07-29 2011-02-03 Lockheed Martin Corporation Threat Analysis Toolkit
US9115996B2 (en) * 2009-07-29 2015-08-25 Lockheed Martin Corporation Threat analysis toolkit
US8935035B1 (en) * 2011-03-31 2015-01-13 The United States Of America As Represented By The Secretary Of The Army Advanced optimization framework for air-ground persistent surveillance using unmanned vehicles
US20130021475A1 (en) * 2011-07-21 2013-01-24 Canant Ross L Systems and methods for sensor control
US9713675B2 (en) 2012-07-17 2017-07-25 Elwha Llc Unmanned device interaction methods and systems
US9254363B2 (en) 2012-07-17 2016-02-09 Elwha Llc Unmanned device interaction methods and systems
US9125987B2 (en) * 2012-07-17 2015-09-08 Elwha Llc Unmanned device utilization methods and systems
US20140025235A1 (en) * 2012-07-17 2014-01-23 Elwha LLC, a limited liability company of the State of Delaware Unmanned device utilization methods and systems
US10019000B2 (en) 2012-07-17 2018-07-10 Elwha Llc Unmanned device utilization methods and systems
US9798325B2 (en) 2012-07-17 2017-10-24 Elwha Llc Unmanned device interaction methods and systems
US9733644B2 (en) 2012-07-17 2017-08-15 Elwha Llc Unmanned device interaction methods and systems
US9241858B2 (en) 2012-09-28 2016-01-26 Elwha Llc Automated systems, devices, and methods for transporting and supporting patients
US10274957B2 (en) 2012-09-28 2019-04-30 Elwha Llc Automated systems, devices, and methods for transporting and supporting patients
US9233039B2 (en) 2012-09-28 2016-01-12 Elwha Llc Automated systems, devices, and methods for transporting and supporting patients
US20140094997A1 (en) * 2012-09-28 2014-04-03 Elwha Llc Automated Systems, Devices, and Methods for Transporting and Supporting Patients Including Multi-Floor Operation
US20140094990A1 (en) * 2012-09-28 2014-04-03 Elwha Llc Automated Systems, Devices, and Methods for Transporting and Supporting Patients
US9220651B2 (en) * 2012-09-28 2015-12-29 Elwha Llc Automated systems, devices, and methods for transporting and supporting patients
US10241513B2 (en) 2012-09-28 2019-03-26 Elwha Llc Automated systems, devices, and methods for transporting and supporting patients
US8886383B2 (en) 2012-09-28 2014-11-11 Elwha Llc Automated systems, devices, and methods for transporting and supporting patients
US9465389B2 (en) 2012-09-28 2016-10-11 Elwha Llc Automated systems, devices, and methods for transporting and supporting patients
US9125779B2 (en) * 2012-09-28 2015-09-08 Elwha Llc Automated systems, devices, and methods for transporting and supporting patients
US20140098990A1 (en) * 2012-10-09 2014-04-10 The Boeing Company Distributed Position Identification
US9214021B2 (en) * 2012-10-09 2015-12-15 The Boeing Company Distributed position identification
US9875403B2 (en) * 2012-12-12 2018-01-23 Thales Method for accurately geolocating an image sensor installed on board an aircraft
US11334069B1 (en) * 2013-04-22 2022-05-17 National Technology & Engineering Solutions Of Sandia, Llc Systems, methods and computer program products for collaborative agent control
US11482114B2 (en) 2014-09-05 2022-10-25 Precision Hawk Usa Inc. Automated un-manned air traffic control system
US10665110B2 (en) 2014-09-05 2020-05-26 Precision Hawk Usa Inc. Automated un-manned air traffic control system
US9875657B2 (en) 2014-09-05 2018-01-23 Precision Hawk Usa Inc. Automated un-manned air traffic control system
WO2016093908A3 (en) * 2014-09-05 2016-08-04 Precisionhawk Usa Inc. Automated un-manned air traffic control system
US9454907B2 (en) * 2015-02-07 2016-09-27 Usman Hafeez System and method for placement of sensors through use of unmanned aerial vehicles
KR20160097873A (en) * 2015-02-10 2016-08-18 한화테크윈 주식회사 Monitoring and tracking system
KR101695254B1 (en) 2015-02-10 2017-01-12 한화테크윈 주식회사 Monitoring and tracking system
DE102015007156A1 (en) * 2015-06-03 2016-12-08 Audi Ag Method in which an unmanned aerial vehicle interacts with a motor vehicle, and motor vehicle
DE102015007156B4 (en) * 2015-06-03 2020-11-19 Audi Ag Method in which an unmanned aerial vehicle interacts with a motor vehicle, and motor vehicle
US10805899B2 (en) 2015-12-18 2020-10-13 At&T Intellectual Property I, L.P. Location assessment system for drones
US10188580B2 (en) * 2016-05-09 2019-01-29 Toyota Motor Engineering & Manufacturing North America, Inc. Systems and methods for providing environment information using an unmanned vehicle
US11756307B2 (en) 2016-09-22 2023-09-12 Apple Inc. Vehicle video system
US11341752B2 (en) 2016-09-22 2022-05-24 Apple Inc. Vehicle video system
US20180082135A1 (en) * 2016-09-22 2018-03-22 Apple Inc. Vehicle Video System
US10810443B2 (en) * 2016-09-22 2020-10-20 Apple Inc. Vehicle video system
US11215998B2 (en) * 2016-12-21 2022-01-04 Vorwerk & Co. Interholding Gmbh Method for the navigation and self-localization of an autonomously moving processing device
US11500380B2 (en) 2017-02-10 2022-11-15 Nissan North America, Inc. Autonomous vehicle operational management including operating a partially observable Markov decision process model instance
US10654476B2 (en) 2017-02-10 2020-05-19 Nissan North America, Inc. Autonomous vehicle operational management control
US11113973B2 (en) 2017-02-10 2021-09-07 Nissan North America, Inc. Autonomous vehicle operational management blocking monitoring
CN107544551A (en) * 2017-09-01 2018-01-05 北方工业大学 Regional rapid logistics transportation method based on intelligent unmanned aerial vehicle
US10836405B2 (en) 2017-10-30 2020-11-17 Nissan North America, Inc. Continual planning and metareasoning for controlling an autonomous vehicle
WO2019089015A1 (en) * 2017-10-31 2019-05-09 Nissan North America, Inc. Autonomous vehicle operation with explicit occlusion reasoning
US11702070B2 (en) 2017-10-31 2023-07-18 Nissan North America, Inc. Autonomous vehicle operation with explicit occlusion reasoning
US11027751B2 (en) 2017-10-31 2021-06-08 Nissan North America, Inc. Reinforcement and model learning for vehicle operation
US11084504B2 (en) 2017-11-30 2021-08-10 Nissan North America, Inc. Autonomous vehicle operational management scenarios
US11874120B2 (en) 2017-12-22 2024-01-16 Nissan North America, Inc. Shared autonomous vehicle operational management
US11110941B2 (en) 2018-02-26 2021-09-07 Renault S.A.S. Centralized shared autonomous vehicle operational management
US11120688B2 (en) 2018-06-29 2021-09-14 Nissan North America, Inc. Orientation-adjust actions for autonomous vehicle operational management
AU2019301825B2 (en) * 2018-07-12 2022-09-22 TerraClear Inc. Object identification and collection system and method
US11710255B2 (en) 2018-07-12 2023-07-25 TerraClear Inc. Management and display of object-collection data
US11854226B2 (en) 2018-07-12 2023-12-26 TerraClear Inc. Object learning and identification using neural networks
EP3821313A4 (en) * 2018-07-12 2022-07-27 Terraclear Inc. Object identification and collection system and method
US11900786B1 (en) 2018-10-31 2024-02-13 United Services Automobile Association (Usaa) Electrical power outage detection system
US11854262B1 (en) * 2018-10-31 2023-12-26 United Services Automobile Association (Usaa) Post-disaster conditions monitoring system using drones
US11789003B1 (en) 2018-10-31 2023-10-17 United Services Automobile Association (Usaa) Water contamination detection system
CN109445444A (en) * 2018-12-25 2019-03-08 同济大学 Robot path generation method for obstacle-dense environments
WO2021090312A3 (en) * 2019-11-06 2021-06-17 Israel Aerospace Industries Ltd. Line of sight maintenance during object tracking
CN111078250A (en) * 2019-11-14 2020-04-28 新石器慧通(北京)科技有限公司 Firmware upgrade method and system for equipment of a movable carrier, and unmanned vehicle
US11635758B2 (en) 2019-11-26 2023-04-25 Nissan North America, Inc. Risk aware executor with action set recommendations
US11899454B2 (en) 2019-11-26 2024-02-13 Nissan North America, Inc. Objective-based reasoning in autonomous vehicle decision-making
US11613269B2 (en) 2019-12-23 2023-03-28 Nissan North America, Inc. Learning safety and human-centered constraints in autonomous vehicles
US11300957B2 (en) 2019-12-26 2022-04-12 Nissan North America, Inc. Multiple objective explanation and control interface design
US11714971B2 (en) 2020-01-31 2023-08-01 Nissan North America, Inc. Explainability of autonomous vehicle decision making
US11577746B2 (en) 2020-01-31 2023-02-14 Nissan North America, Inc. Explainability of autonomous vehicle decision making
US11782438B2 (en) 2020-03-17 2023-10-10 Nissan North America, Inc. Apparatus and method for post-processing a decision-making model of an autonomous vehicle using multivariate data
US11958183B2 (en) 2020-09-18 2024-04-16 The Research Foundation For The State University Of New York Negotiation-based human-robot collaboration via augmented reality
WO2023204821A1 (en) * 2022-04-22 2023-10-26 Electric Sheep Robotics, Inc. Navigating an unmanned ground vehicle

Also Published As

Publication number Publication date
US8244469B2 (en) 2012-08-14
US20100017046A1 (en) 2010-01-21

Similar Documents

Publication Publication Date Title
US8244469B2 (en) Collaborative engagement for target identification and tracking
Price et al. Deep neural network-based cooperative visual tracking through multiple micro aerial vehicles
Yu et al. Cooperative path planning for target tracking in urban environments using unmanned air and ground vehicles
EP3619591B1 (en) Leading drone
Stentz et al. Integrated air/ground vehicle system for semi-autonomous off-road navigation
EP3619584B1 (en) Underwater leading drone system
US9746330B2 (en) System and method for localizing two or more moving nodes
US20180321681A1 (en) Leading drone method
US11726501B2 (en) System and method for perceptive navigation of automated vehicles
US10037041B2 (en) System and apparatus for integrating mobile sensor platforms into autonomous vehicle operational control
Butzke et al. The University of Pennsylvania MAGIC 2010 multi‐robot unmanned vehicle system
Çaşka et al. A survey of UAV/UGV collaborative systems
Arola et al. UAV pursuit-evasion using deep learning and search area proposal
Martinez-Rozas et al. Skyeye team at MBZIRC 2020: A team of aerial and ground robots for GPS-denied autonomous fire extinguishing in an urban building scenario
Cheung et al. UAV-UGV Collaboration with a PackBot UGV and Raven SUAV for Pursuit and Tracking of a Dynamic Target
Kamat et al. A survey on autonomous navigation techniques
Melin et al. Cooperative sensing and path planning in a multi-vehicle environment
Dille et al. Air-ground collaborative surveillance with human-portable hardware
Martínez-Rozas et al. An aerial/ground robot team for autonomous firefighting in urban GNSS-denied scenarios
US20240126294A1 (en) System and method for perceptive navigation of automated vehicles
Pippin et al. The design of an air-ground research platform for cooperative surveillance
Maharajan et al. Using AI to Improve Autonomous Unmanned Aerial Vehicle Navigation
Yıldırım Relative localization and coordination for air-ground robot teams
Mattison et al. An autonomous ground explorer utilizing a vision-based approach to indoor navigation
Talwandi et al. An Automatic Navigation System for New Technical Advanced Drones for Different Applications

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: ENDEAVOR ROBOTICS, INC., MASSACHUSETTS

Free format text: CHANGE OF NAME;ASSIGNOR:IROBOT DEFENSE HOLDINGS, INC.;REEL/FRAME:049837/0810

Effective date: 20181011

AS Assignment

Owner name: FLIR DETECTION, INC., OKLAHOMA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ENDEAVOR ROBOTICS, INC.;REEL/FRAME:049244/0515

Effective date: 20190325