US20070238073A1 - Projectile targeting analysis - Google Patents

Projectile targeting analysis

Info

Publication number
US20070238073A1
Authority
US
United States
Prior art keywords
emitter
target
camera
projectile launcher
aimpoint
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/398,400
Inventor
Rocco Portoghese
Richard Hebb
Edward Purvis
James Purvis
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GOVERNMENT OF UNITED STATES SECRETARY OF NAVY
US Department of Navy
Original Assignee
US Department of Navy
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by US Department of Navy
Priority to US11/398,400
Assigned to GOVERNMENT OF THE UNITED STATES, SECRETARY OF THE NAVY. Assignment of assignors interest (see document for details). Assignors: HEBB, RICHARD CHRISTOPHER; PORTOGHESE, ROCCO; PURVIS, EDWARD JOHN
Publication of US20070238073A1



Classifications

    • F: MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F41: WEAPONS
    • F41G: WEAPON SIGHTS; AIMING
    • F41G3/00: Aiming or laying means
    • F41G3/32: Devices for testing or checking
    • F41G3/323: Devices for testing or checking for checking the angle between the muzzle axis of the gun and a reference axis, e.g. the axis of the associated sighting device
    • F41G3/26: Teaching or practice apparatus for gun-aiming or gun-laying
    • F41G3/2605: Teaching or practice apparatus for gun-aiming or gun-laying using a view recording device cosighted with the gun
    • F41G3/2616: Teaching or practice apparatus for gun-aiming or gun-laying using a light emitting device
    • F41G3/2622: Teaching or practice apparatus for gun-aiming or gun-laying using a light emitting device for simulating the firing of a gun or the trajectory of a projectile
    • F41G3/2661: Teaching or practice apparatus for gun-aiming or gun-laying using a light emitting device for simulating the firing of a gun or the trajectory of a projectile in which the light beam is sent from the target to the weapon
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00: Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/16: Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using electromagnetic waves other than radio waves
    • G01S5/163: Determination of attitude

Definitions

  • This invention relates generally to projectile targeting and more particularly to small projectile trajectory analysis.
  • Aimpoint is used in actual field situations to determine a fire control solution. Aimpoint is also used in training weapons operators to sharpen their skills and to improve their performance.
  • U.S. Pat. Nos. 4,804,325 and 5,213,503, each of which is incorporated herein by reference, illustrate representative relevant techniques in the prior art.
  • U.S. Pat. No. 4,804,325 discloses a weapons training simulator that predicts aimpoint and trajectory.
  • the invention uses a sensor mounted on a weapon to generate a target position signal based on point sources located within the perimeter of simulated targets, thereby defining a diffusely illuminated target field.
  • the aimpoint is determined using the output from light-emitting diodes (LEDs), which define point sources imaged onto a quadrature detector array to create uniform diffuse sources.
  • U.S. Pat. No. 5,213,503 discloses a commercially available aimpoint infrared spot tracking system that includes a charge coupled device (CCD) video camera interfaced to a digital frame grabber operating at standard video rates for use in simulator training.
  • a lens system images the tracking area (i.e., video projection screen) onto the CCD imaging sensor.
  • the frame grabber digitizes each frame of video data collected by the CCD camera. This data is further processed with digital signal processing hardware as well as software algorithms to find position coordinates of the imaged IR spot.
  • the '503 patent uses tracking system software to allow the aimpoint to be continuously monitored during a training scenario.
  • a CCD-based tracking system or similar device utilizing a two-dimensional position sensing detector (PSD) lateral-effect photodiode provides the aimpoint position data.
  • the aimpoint analysis of the '503 patent is limited to a virtual environment with all targets displayed upon a video projection screen.
  • an infrared source is mounted to the weapon and the beam from the infrared source is projected onto the screen.
  • Live testing is generally performed to determine the path of projectiles by physically tracking the projectile path (hereafter the “fall of shot”) from the nozzle exit to the actual end-point of the projectile path, which could be the location at detonation.
  • aimpoint analysis is the focus of testing when experimenting with new weapon sights or fire control systems.
  • weapon and projectile launcher are used interchangeably.
  • an aimpoint tracking and data collection system for projectiles during live fire testing or training events.
  • the exemplary embodiment uses infrared reference emitters and a projectile launcher-mounted camera system to measure aimpoint relative to predefined targets on a live testing or training range. Calculated aimpoint angles may be used in conjunction with a ballistics model to predict a projectile's miss distance at the plane of the intended target. In addition, if the projectile is fired from a weapon, then the weapon effects may be incorporated to predict the impact.
  • the system according to the first aspect may further include moving targets.
  • the system according to the first aspect may further include a computer program for performing coordination and control of the equipment and calculations.
  • the calculation may further include the effects of external conditions such as wind on the trajectory.
  • a method for determining projectile launcher aimpoint comprises surveying a firing range for determining the location of target and emitter coordinates, calculating the centroid of a target having a defined perimeter, the target being placed on the firing range, placing fixed infrared emitting sensors on the firing range wherein the sensors are located external to the target perimeter, selecting the location of the shooter at a firing line, mounting a camera to the projectile launcher, calibrating the optical axis of a camera with the boresight of the projectile launcher, mapping the camera with the target and emitter position, providing equipment means for controlling the emitter and for determining the predicted aimpoint at a triggering event, determining the projectile launcher aimpoint at the triggering event, and outputting the measured projectile launcher aimpoint at the triggering event.
  • FIG. 1 is a perspective view of an exemplary embodiment illustrating the location of components of the present invention
  • FIG. 2 is a signal flowchart of an exemplary embodiment of the present invention
  • FIG. 3 is a signal flowchart for the emitter controller of the present invention.
  • FIG. 4 is a signal flowchart for the downrange controller of the present invention.
  • FIG. 5 is a signal flowchart for the system interface box of the present invention.
  • FIG. 6 is a flow diagram of the software for controlling the illustrative embodiment of the present invention.
  • FIG. 7 is a software program flowchart of the System Interface Box (SIB);
  • FIG. 8 is a software program flowchart of the Emitter Controller (EC).
  • FIGS. 9a, 9b, and 9c (sheets 1, 2, and 3, respectively) are a flow diagram of the software for controlling the illustrative embodiment of the present invention.
  • FIG. 10 is a plan view illustrating a firing range showing the mathematical relationships for fixed targets, emitters and a projectile launcher, and includes the boresighting relationship between the projectile launcher aimpoint and the camera optical axis;
  • FIG. 11 is a plan view of a firing range showing the sightline and the sightline intersection with the moving target path plane indicating the mathematical relationship between the sightline and the moving target at shot fired time.
  • FIG. 12 is a plan view illustrating a track mounted moving target with attached emitter illustrating the target, shot, and round trajectories, along with equations for calculating the coordinates at the time of the round intercept with the target path plane.
  • a projectile launcher aimpoint tracking system is shown generally at 10.
  • a camera 12 is affixed to a mount 14, and the mount is affixed to a projectile launcher such as a rifle 16.
  • Stationary targets 18 or moving targets 20, including stationary and moving infrared emitters 24, respectively, along with emitter controllers 22, are located in a firing range.
  • a firing range is a volume where targets are located and can include an entire battle theatre.
  • Infrared emitters 24, typically LEDs or optically filtered halogen lamps, are located downrange as references for the determination of target location.
  • the associated emitter controller 22 controls each infrared emitter and receives signals from the system interface box 28 or a downrange controller 25, which communicates with the emitter controller by radio transmission.
  • a computer 26 provides the overall control of the system with interfaces provided through a system interface box 28, each being mounted in a rack 30.
  • Radio transmitters 32, relays 34 and receivers 36, along with connecting cables 38, provide communication connections between system components.
  • Sensors that detect projectile launcher related events include one or more of a microphone 40a, a powered sensor 40b, or a normally open switch 40c to provide input to the system interface box 28.
  • a video recorder 42, for example a videocassette recorder, digital recorder, digital compact disk recorder, etc., receives and stores the camera video and event information, including information from the trigger sensors.
  • the projectile launcher mounted camera 12 views the live-fire range from the perspective of the projectile launcher bore sight.
  • Both the camera and its lens 44 are removable and may be replaced with a different camera and/or different lens for each test depending upon the test objectives.
  • Typical variables for selecting the camera and lens are the camera size, resolution, required field-of-view, and mounting requirements.
  • small-bore projectile launchers require a small, light camera to minimize interference with operator control and a narrow field-of-view to maximize angular resolution per video pixel.
  • a wider field-of-view is required if the targets are dispersed over a wide firing range, such as battle theatre, for example 30 degrees as opposed to a narrow target range, for example 8 degrees.
  • the camera 12 is an ELMO CCD (charge-coupled device) black-and-white camera, model ME4111R, that does not have the typical near-infrared blocking filter installed (although other suitable cameras are commercially available, for example, a small rectangular 1/2″ format camera with an electronic high-speed shutter manufactured by Panasonic that can have its near-infrared blocking filter removed).
  • the image pick-up device is a 1/2 inch interline-transfer CCD with 768 (H) × 494 (V) effective picture elements (pixels).
  • the camera typically consists of a camera head with lens, a camera control unit (CCU) and an interconnecting cable.
  • the CCU is powered by 12 VDC from either a battery or an AC adapter.
  • the camera is operated in the standard NTSC mode with a vertical frequency (field) of 59.94 Hz, horizontal frequency (line) of 15.734 kHz and provides a composite video output.
  • the Elmo camera uses an Elmo lens mounting and two lenses, having focal lengths of 24 mm and 36 mm.
  • the Panasonic camera uses standard C-mount lenses and a single 35 mm lens along with two focal length adapters to provide 1.5× and/or 2× focal length products.
  • the 35 mm lens can be modified to provide 52.5, 70, and 105 mm focal lengths by using combinations of the adapters.
  • the camera mount 14 is adjustable permitting the camera lens optical axis to be aligned roughly with the projectile launcher bore sight line.
  • a software managed boresight process is used for fine aligning the angular offset of the camera optical axis from the projectile launcher boresight line.
  • the remaining angular and linear camera axis offset from the sightline is recorded and used in subsequent projectile launcher aimpoint error calculations.
  • the angular and linear offsets of the camera optical axis from the sightline are corrected via the software calculations to remove effects upon the aimpoint measurement accuracy, thereby eliminating the need for optical corrective algorithms.
  • the field coverage must provide for the camera 12 imaging of a target's reference emitter 24 when the projectile launcher 16 is sighted on a target 18 , 20 .
  • This requirement originates from the need to mark the reference emitter when a projectile is fired at the target. The field coverage is optimally between 10 meters and 100 meters at the target distance, permitting the emitter to be located outside of the perimeter of the target so long as the emitter remains within the field of camera coverage.
  • a titler 46, located in the operator station main rack 30, overlays text onto the incoming camera video and transmits the result to the VCR 42.
  • the titler is responsive to the system computer 26 through an RS232 interface and provides output signals to the system computer through the RS232 interface and to the VCR through an RS170 video interface.
  • the system computer provides commands to the titler to overlay test data onto the camera video for correlation to events of interest occurring during the test.
  • the digital video recorder 42 is a typical, commercially available cassette unit, but may alternately be a digital recorder that utilizes other media such as digital disks or internal hard disks.
  • the digital recorder is mounted in the operator station main rack 30 .
  • the recorder records the video from the camera 12 and the titler 46, as well as the main and auxiliary trigger events from the system interface box 28, on the left and right analog audio channels. The trigger input is first transmitted to a tone generator 50 and then to the videocassette recorder 42.
  • the near-infrared CCD image of the live fire range is also output from the video recorder 42 via an RS170 video interface to the system computer's frame grabber board for marking the emitter centroid relative to the projectile launcher sightline.
  • the marking of the emitter can be performed immediately after an event. Alternately, the marking of the emitter may be performed during post-test processing, where the videocassette recorder provides recorded event data and test video to the system computer and the computer's frame grabber board to be described hereinbelow.
  • Microphone sensor 40 a inputs representing the trigger events recorded on the left and right audio channels are received by the VCR.
  • the sensor signals 40a, 40b, 40c are also outputted to the system interface box 28 main and auxiliary audio inputs for trigger event re-generation.
  • the frame grabber (not shown) is a commercially available Matrox Orion board that decodes the composite video from the CCD camera 12, described hereinabove, and digitizes the image to a 640 (H) by 480 (V) pixel format. All measurements of the emitter 24 centroid are made on video frames digitized by the frame grabber.
  • the 10-nanosecond jitter of the Matrox Orion board, which equates to less than ten percent of a pixel width, has minimal effect on the emitter centroid measurement and the resulting system error.
  • the targets 18 , 20 are those typically used for live fire ranges.
  • the targets previously used have been E-silhouettes. Other targets may be used; the only consideration is that there is a defined target centroid.
  • the targets are attached to supports that hold the targets in position.
  • both fixed targets 18 and moving targets 20 are illustrated in the exemplary embodiment.
  • Each moving target is mounted on a track 52 and responds to signals from the system computer 26 to traverse a portion of the firing range through a preprogrammed path when commanded by the computer.
  • a dedicated preprogrammed target motion controller (not shown) can control each target using servomotors.
  • the target motion controller is able to respond to contemporaneous operator commands using servomotors. Mounting the moving targets on tracks simplifies the calculation of location within the firing range by the downrange controller (to be described hereinafter).
  • the location of the moving target is transmitted through alternate means, for example a multiplexed optical signal from a rangefinder or a global positioning system.
  • the exemplary embodiment includes the implementation of a moving target 20 measurement subsystem.
  • the goal of the moving target measurement subsystem is to provide the 3D coordinates of the moving target and emitter centroids at the time that a shot is fired (to determine the projectile launcher sightline angles relative to the target center) and at the time of the terminal position of the fired projectile (where it intersects the target path plane, hits the ground, or air bursts) to differentiate the location of the moving target and the projectile at the time the projectile reaches its terminus or intersects the target.
  • the location of the moving targets' centroid, like the fixed targets' centroid, is determined with reference to the infrared emitters' centroid.
  • the infrared emitters 24 are located on the firing range and are used in conjunction with the projectile launcher-mounted CCD camera 12 to determine where a projectile launcher 16 is aimed relative to targets 18 , 20 located on the range.
  • the emitters for fixed targets 18 are mounted to upright posts 54 that have been driven into the ground to rigidly fix the emitter position.
  • the ability to support live fire operations is provided by placing the emitters to one side of the target to minimize the possibility that an emitter will be struck by a projectile.
  • the emitters are located on the range such that the camera 12 can sense the emitter during the projectile launcher aiming process.
  • the camera does not need to sense the target so long as the emitter associated with the target under fire is in the camera's field of view and is identified as the target reference emitter in the system computer program software.
  • the system computer 26 defines the target's reference emitter 24 via a target-emitter database.
  • the emitters 24 are halogen light sources that are optically filtered using an RG780, or generally equivalent, filter to block visible output while allowing near-infrared output to pass through.
  • Near infrared is commonly defined as the region of the electromagnetic spectrum wavelength between 0.77 to 1.4 microns.
  • the RG 780 Filter makes the emitter output invisible to a shooter, but provides near infrared for the CCD camera 12 to detect.
  • the CCD camera is also optically filtered, preferably using an RT830 filter (or general equivalent) to optimize detection of near-infrared output.
  • the filter blocks most of the visible wavelengths but passes near-infrared radiation that corresponds to the emitter output and is within the CCD's near-infrared response.
  • the diameter of the infrared emitter 24 aperture is typically 50 millimeters, with smaller or larger sizes used for nearer or farther emitter positions.
  • the emitters used in the illustrated embodiment have angular output profiles that range from a 10-degree symmetrical cone to a 45-degree horizontal by 10-degree vertical compressed cone shape.
  • the calculated angular size of the emitter presented to the CCD camera 12 varies as a function of distance from the camera.
  • the emitter subtends a relatively small angle at the 50-meter range and the angle reduces linearly as the range increases.
  • 50 meters is the closest planned range to an emitter, while 1000 meters is the maximum range to an emitter although closer and further ranges are within the contemplation of the illustrated embodiment.
  • an emitter placed at a further range will present itself as a smaller object in the CCD camera image and would allow a more precise location to be derived from the emitter marking/location process. If an emitter subtends less than a pixel when imaged on the CCD camera, marking the emitter centroid will be difficult due to an inability to determine the emitter shape. However, the image of the emitter sensed by the CCD sensor is actually larger, due to a phenomenon known as blooming. Blooming will become important as the angular size of an emitter aperture falls below the angle covered by a pixel.
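  • By way of a worked example of these angular relationships (a sketch only; the 50 mm aperture and the 50 m and 1000 m ranges are the figures quoted above, while the per-pixel angle assumes a hypothetical 8-degree field of view spanning 640 pixels):

        import math

        APERTURE_M = 0.050   # 50 mm emitter aperture, per the description above
        PIXEL_URAD = 218.0   # assumed per-pixel angle: ~8 deg FOV / 640 pixels

        for range_m in (50.0, 250.0, 1000.0):
            # small-angle subtense of the aperture as seen from the camera
            subtense_urad = math.atan2(APERTURE_M, range_m) * 1e6
            print(f"{range_m:6.0f} m: {subtense_urad:7.1f} urad "
                  f"~ {subtense_urad / PIXEL_URAD:4.2f} px")

        # At 50 m the aperture subtends ~1000 urad (several pixels); at 1000 m
        # it subtends ~50 urad, well under one pixel, which is where blooming
        # dominates the apparent emitter size.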
  • the physical size of the emitter 24 aperture does not provide a direct relationship to angular size in the CCD camera 12 image.
  • high output emitters are used to provide sufficient near-infrared output to overcome ambient radiation from the sun. This high output increases blooming.
  • captured images of the emitter at various ranges show larger angular sizes depending on the ambient illumination level and the emitter range.
  • IR emitters 24 are commonly available along with power sources.
  • a portable 12V power source may be used, for example a battery (not shown).
  • a DC generator or powered inverter may also be used to power the emitters.
  • An emitter controller 22 controls each emitter 24 .
  • the emitter controller communicates with the computer using a broadcast radio antenna 32 and receiver antenna 36 although other types of communication links, for example microwave, wide area wireless network, telephonic network etc. are well known in the art.
  • the emitter controller includes a Zworld BL1800 microprocessor 56 .
  • a 3.686 kHz resettable TX/RX clock 58 provides timing to the BL1800 microprocessor upon initiation of an enable signal from the microprocessor.
  • a radio transceiver 60 connected to the microprocessor handles radio communications.
  • the transceiver is a TEC T400 as is well known to those of skill in the art.
  • the microprocessor sends a signal to the emitter power control circuit 64 to turn the emitter on and off.
  • An ID select circuit 66 is provided to enable each emitter controller to be set to a unique ID, allowing radio broadcast commands to be addressed to specific controllers.
  • a battery level detector circuit 68 is provided to monitor the battery charge state.
  • Auxiliary circuits provide status LEDs and an override switch 70 .
  • the override switch is provided to allow a user at the emitter controller to cause the controller to power its attached emitter without recourse to radio commands, a feature useful during system setup and checkout.
  • the emitters 24 are placed in relation to the firing range to provide a coordinate reference.
  • the need to locate the IR emitters within close proximity to, or at, the fixed targets 18 is eliminated so long as the distance vector from the emitter to the target is known.
  • the IR emitters are therefore generally located external to targets within the firing range but can also be located within the target boundary.
  • the downrange placement of the IR emitters takes advantage of the high resolution of the camera 12 in order to render the optic and alignment errors negligible for determination of the aimpoint position vector.
  • the Lens Mapping Procedure within the system computer software to be described hereinbelow is used to provide the microRadiansperPixel_H and microRadiansperPixel_V values associated with a particular CCD camera 12 and lens 44 combination that will be mounted to a projectile launcher 16 under test.
  • the downrange controller 25 (DRC) consists of a Zworld BL1800 microprocessor 80, a RIEGL LD90-3100-VLS-FLP rangefinder 82, a TEC T400 radio transceiver 96, and a custom circuit board containing an array of control and interface circuits.
  • the transceiver controls radio communications.
  • a 3.686 kHz resettable TX/RX clock circuit 86 provides timing for radio communications.
  • the SIB and all DRC's are tied together through a hardwired dual-channel current loop and RS485 channel. The two channels of the current loop are used by the SIB to send time critical signals, while the RS485 serial channel is used for bulk data transfer.
  • All hardwired connections to the DRC are optoisolated, protecting the DRC from potentially damaging transient voltages.
  • the current loop channels are isolated by the custom isolation circuit 94 .
  • the RS485 serial channel is isolated by a B&B Electronics 485OPIN optoisolator 102 .
  • Either the SIB, through the current loop, or the DRC's internal microprocessor 80 may enable the 2 kHz Laser Trigger Clock 84.
  • the Laser Trigger Clock in turn causes the rangefinder 82 to gather range data at 2 kHz. Data from the rangefinder is downloaded to the microprocessor for analysis and storage through an RS232 channel.
  • An ID select circuit 100 provides each DRC with a unique address.
  • a downrange controller 25 is positioned at the end of each moving target's track 52 in order to monitor the location of the moving target 20 .
  • the DRC's rangefinder takes 2000 range samples per second.
  • the DRC's microprocessor averages groups of ten samples so that range data is recorded 200 times per second. Repetitive samples are discarded so that only motion is recorded.
  • Each controller is provided with capacity to store up to 60 seconds of motion.
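  • A minimal sketch of this decimation scheme, assuming the 2 kHz rangefinder stream arrives as a list of range samples (the function and variable names are illustrative, not the actual DRC firmware):

        def decimate_ranges(samples_m, group=10):
            """Average groups of ten 2 kHz samples down to 200 Hz records,
            discarding repeated (motionless) values so only motion is stored."""
            records = []
            for i in range(0, len(samples_m) - group + 1, group):
                avg = sum(samples_m[i:i + group]) / group
                if not records or avg != records[-1]:
                    records.append(avg)
            return records

        # a target holding at 300.0 m, then beginning to move
        stream = [300.0] * 20 + [300.0 + 0.01 * i for i in range(20)]
        print(decimate_ranges(stream))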
  • the present embodiment has a capacity for 16 individually addressed downrange controllers, although it is within the contemplation of the present embodiment to include additional controllers.
  • a DC supply for example a battery (not shown), powers each controller 25 , although it is within the contemplation of the invention to use other power sources including inverters, fuel cells and DC generators.
  • trigger event sensors 40a, 40b, 40c are used to initiate the aimpoint vector determination sequence.
  • Three types of sensors are available for the initiation function: a microphone, a powered sensor, and a switch.
  • Each trigger event sensor provides output to the system interface box (SIB) 28 and to the VCR 42.
  • the microphone 40a generates a trigger event by detecting fire (listening for projectile launcher recoil) and outputs an analog signal.
  • the microphone is a standard commercial microphone having a sensitivity of 6 dB although higher and lower sensitivities will function in the embodiment.
  • the microphone is mounted on, or near, the projectile launcher such that the diaphragm of the microphone responds to the initiating sound of the projectile launcher firing.
  • the powered sensor 40b is any digital sensor that requires external power.
  • the powered sensor is provided with +5 VDC from the system interface box (SIB) 28.
  • the SIB generates a trigger event when the powered sensor digital output goes high.
  • Powered sensors include Hall Effect sensors to detect the motion of some projectile launcher part or an accelerometer to detect the shock of the projectile launcher recoil.
  • the powered sensor is mounted on the projectile launcher such that the sensor is responsive to the movement or acceleration of the sensed component.
  • the switch 40c is a normally open switch that completes a circuit when closed.
  • the SIB 28 generates a trigger event when the switch is closed or pressed.
  • the switch is used to detect a preselected physical projectile launcher operation, such as a trigger pull or a button press, or may be a hand switch held by the operator, test director or system operator.
  • the associated output signal is sent to a glitch detector with lockout timer 110.
  • the lockout timer provides a period after each trigger event during which further trigger events are ignored (a software sketch of this lockout behavior follows the tone generator description below).
  • the glitch detector output is sent to the Zworld BL1800 microprocessor 112 within the SIB 28, alerting the microprocessor of the trigger event.
  • the glitch detector output is also sent to a trigger tone generator 50.
  • the trigger tone generator produces a short burst of audio line-level tone that is recorded by the system video recorder 42, marking the trigger event on the video recording.
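  • A software sketch of the lockout behavior (the glitch detector 110 is hardware; this illustrative class only mimics its effect, and the half-second window is an assumed placeholder since the actual lockout period is not stated):

        class LockoutTrigger:
            """Accept a trigger event, then ignore further events for a
            fixed lockout period, mimicking glitch detector 110."""
            def __init__(self, lockout_s=0.5):
                self.lockout_s = lockout_s
                self._last = None

            def fire(self, t_s):
                if self._last is None or t_s - self._last >= self.lockout_s:
                    self._last = t_s
                    return True   # accepted: alert microprocessor and tone generator
                return False      # ignored: falls inside the lockout window

        trig = LockoutTrigger()
        print([trig.fire(t) for t in (0.00, 0.01, 0.30, 0.60, 0.61)])
        # -> [True, False, False, True, False]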
  • the BL1800 microprocessor 112 of the SIB 28 serves as a general interface between the downrange and firing line hardware, including the emitter controllers 22 and downrange controllers 25, the trigger event sensors 40a, 40b, 40c, the VCR 42 and the system computer 26.
  • the TEC T400 radio transceiver 120 handles radio communications.
  • the SIB 28 communicates to the system computer 26 through three channels: RS232 serial, parallel digital, and RS485 serial.
  • General communication between the SIB and the system computer are passed through the RS232 serial channel.
  • the parallel digital signals are used to transmit time critical communication.
  • the digital signals from the system computer are received by the SIB through a Data Translations STP68 interface board 124 .
  • Signal conditioning circuitry 122 turns two of the digital signals into the dual-channel current loop that joins the RS485 serial channel in hardwire linking the SIB and all of the DRC's.
  • the current loops transmit time-critical events to the DRC's while the RS485 serial channel allows for bulk data transfer.
  • a Keypad 114 and a Matrix Orbital LK2204-25 LCD display 116 allow the user to interact with the SIB, monitor its status, issue commands, and run test functions.
  • the system computer 26 is a typical commercially available PC capable of running software embodied on the computer media.
  • the computer includes a Windows 2000™ or higher operating system.
  • PC requirements in the exemplary embodiment are at least 128 MB of RAM and a 733 MHz Pentium III processor or equivalent although the system will run on other platforms.
  • a CDROM drive or equivalent means, for example, an external memory device or Internet download capability is required for software installation.
  • the PC contains a memory device such as a hard disk, a microprocessor and input device and output devices.
  • the output devices include both a display screen and a printer.
  • the memory device, for example the hard disk, preferably should have a capacity of at least 10 GB of free hard drive space for program operation and data storage.
  • the memory requirement may vary and depends upon the expected size of the database. As is well known in the art, memory can be selected to match the database or increased by installing a higher capacity disk drive.
  • the software for controlling the equipment that comprises the illustrative embodiment will now be described. It is to be appreciated that the software can be expressed in many forms by those skilled in the art and only the necessary functions will be described herein.
  • the software provides control among downrange hardware, operation station hardware and firing line hardware.
  • the compiled code in the Zworld BL1800 microprocessor 112 automatically initiates at system power-up 150.
  • during initialization 152, variables and arrays are created and the digital I/O system is initialized with outputs set to their default startup states.
  • the main loop is started 154 and repeats until the system is powered down.
  • the main and auxiliary trigger pulse digital inputs are polled 156 to check for hardware detection of projectile launcher events.
  • the system computer is alerted 158 to a hardware-detected projectile launcher event by raising the trigger flag. Then the main and auxiliary trigger reset digital inputs are polled 160 to determine if the system computer commands the main or auxiliary trigger flags to be lowered 162.
  • the main or auxiliary trigger flags are lowered as commanded by the system computer.
  • the RS232 serial buffer serving communications to the LCD display is polled 164 to determine if a keypad key press has been received.
  • the program responds appropriately to the key press.
  • the RS232 serial buffer serving communications to the system computer is then polled 168 to determine if a computer command has been received.
  • the computer determines whether a ping is received. All commands from the system computer other than a ping are meant for the emitter controllers via the radio link, and the command bytes are placed in the outgoing queue 174.
  • the program determines if there are any computer commands waiting to be sent to the emitter controllers via the radio link.
  • the SIB radio transceiver's received signal strength indicator line is polled 178 to determine if another unit's transceiver is currently transmitting.
  • all computer command bytes in the outgoing queue are sent to the emitter controllers via the radio transceiver 180.
  • the bytes from the outgoing queue are cleared after being sent 182.
  • the SIB radio transceiver's received signal strength indicator is polled to determine if another unit's transceiver is currently transmitting 184. If another unit is transmitting, incoming radio messages are received 186 and the system responds as appropriate 188 to the received radio message.
  • the main loop ends 190 and repeats in the start step 154.
  • the emitter controller program will now be described.
  • the compiled code resident in the emitter Zworld BL1800 microcontroller 56 (see FIG. 3) is initiated upon powerup 202.
  • variables and arrays are created and the digital I/O system is initialized with outputs set to their default startup states.
  • the two digital inputs of the EC's 2-position DIP switch are polled to determine the controller's ID tens digit, and the four digital inputs of the 10-position rotary switch are polled to determine the controller's ID ones digit 206.
  • the status LEDs are flashed to indicate the emitter identification number; the LEDs are cycled a number of times equal to the tens digit, then flashed a number of times equal to the ones digit 208.
  • the main loop starts 210 and repeats until power down.
  • the program queries whether there are any bytes waiting to be sent to the system computer via radio link 212.
  • the EC settings are queried to determine if radio responses have been disabled by system computer command. Radio responses may be disabled to prevent confused radio traffic when many EC's are deployed.
  • the program pauses for a duration determined by the emitter ID number 216, 218, 220. This pause prevents the EC's from attempting to transmit simultaneously, and causes them to respond to broadcast commands in numerical order; a sketch of this staggering follows.
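  • A sketch of that ID-based stagger as a simple slotted delay (the per-slot time is an assumption; the description fixes only the ordering behavior, not the timing):

        SLOT_S = 0.050  # assumed delay per ID slot

        def response_delay(ec_id):
            """Delay before an emitter controller answers a broadcast.
            Higher IDs wait longer, so EC transmissions do not collide and
            replies arrive in numerical order."""
            return ec_id * SLOT_S

        for ec_id in (1, 2, 7, 13):
            print(f"EC {ec_id:2d} transmits after {response_delay(ec_id):0.3f} s")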
  • the EC radio transceiver's received signal strength indicator line is polled to determine if another unit's transceiver is currently transmitting.
  • the waiting messages in the outgoing queue are broadcast via the EC's radio transceiver 222.
  • the outgoing queue is cleared 224 after the waiting messages are sent.
  • the EC's set ID is checked 226.
  • the EC determines its battery state 228 through two onboard voltage comparators, dividing the range of battery voltages into three categories: good, warn and critical. If manual override functionality has not been disabled by the system computer 230, the manual override switch state is polled and, if "on", the manual override flag is set 232.
  • the EC then polls the radio transceiver's RSSI line to determine if another unit's transceiver is currently transmitting 234.
  • in steps 236 through 240, incoming bytes are received and the message transmitted by the bytes is handled. Appropriate responses, if any, are placed in the outgoing queue.
  • the program polls whether the emitter manual override flag has been raised 242. If the flag is raised, then the emitter is turned on 244 and the emitter status LED is flashed steadily 246, indicating that the emitter is on due to manual override, and the main loop ends. If the manual override flag is not raised, the program polls whether the emitter flag is raised 248. If the flag is raised, then the emitter and the emitter status LEDs are turned on 250 and the main loop ends. If neither the manual override flag nor the emitter flag is raised, the emitter is turned off 254, the emitter status LED turns off 256 and the main loop ends 258.
  • the compiled code resident in the downrange controller Zworld BL1800 microcontroller 80 starts and runs. Variables, states and settings are initialized 264.
  • the hardware ID is checked 266 by polling the four digital inputs of the DRC's 16-position rotary switch to determine the unit's ID number.
  • the handle-temp function 268 uses onboard sensors to determine the DRC's current internal temperature. The DRC's fan, defroster, and/or heater are then activated as needed to keep the unit's internal temperature within tolerance limits.
  • the main loop 270 is then started and repeated until shutdown. Once every 15 seconds a subset of commands within the main loop is performed 272, wherein the hardware ID lines are checked and the DRC ID number is ascertained 274, the handle-temperature routine is performed 276, and the check-battery-voltage routine is performed 278, thereby ending 280 the periodic check cycle. When this main loop periodic subset is completed, the main loop functions are performed.
  • Trigger 1 is a current loop passing through the SIB and all of the DRC's. When Trigger 1 is active, all rangefinders will begin ranging and sending data to their microcontrollers. If Trigger 1 is active, the DRC then determines if the system computer has enabled the DRC for recording 286. If the DRC is enabled for recording, the microcontroller will begin processing and recording the incoming data from the rangefinder 288. The rangefinder will continue to range until Trigger 1 returns inactive. The microcontroller will continue to record data from the rangefinder until no more data is forthcoming or until a full sixty seconds of data have been recorded.
  • the DRC will disable itself 292 for recording if the system computer has placed it in disable-after-record mode. Disable-after-record mode 292 prevents new data from being accidentally overwritten before the data can be downloaded to the system computer. If the DRC is in the disable-after-record mode, then no further rangefinder data will be recorded until the DRC is again enabled by a system computer command. If Trigger 1 is not active but rangefinder data has been received 294, the data can be discarded. The RS485 serial buffer serving the communications with the system computer is then polled 298 to determine if commands have been received. If so, the microprocessor acts appropriately 300, 302. Any responses to the system computer are placed in the outgoing queue 304. The microprocessor next checks the outgoing queue.
  • if the queue contains any bytes, they are sent to the system computer through the RS485 serial channel and the queue is cleared 306, 308.
  • the RS232 serial buffer serving communications with the keypad/display unit is polled to determine if a keypad keypress has been received 310. Appropriate responses to any keypresses are generated 312 and the main loop ends 314.
  • the system computer program comprises a main program and two threads, the frame grabber and the event detection thread.
  • a process-events routine captures data for later analysis, but the routine also executes in real time.
  • a main program is illustrated at 300.
  • the program starts 302 by initializing data structures 304 entered into databases 306, 308, 310.
  • the data is collected from the on-site survey 312, the lens mapping procedure 314, and from the test plan and ballistic models 316.
  • the program initiates communication linkage and verification with the system hardware through wires and radios along with its internal communications 318.
  • the event detection and frame grabber threads are initialized and placed in a suspended state 320.
  • the computer display shows the main menu 322, whereby the operator selects either the capture or analyze modes 324. In the capture mode, filenames are assigned and new data files are opened and initialized for the event 326.
  • the detection processing mode is selected for either live processing 330, 332, 334, 336b or post-test processing 338, 340, 342.
  • in Live Processing, data from a single event is captured and immediately analyzed at 336b.
  • in Post Processing, where a large amount of data is to be captured sequentially, the data are recorded and analyzed later via AnalyzeData( ) at 336a.
  • the System Interface Box (SIB) turns on the default emitter for the selected target after a target is selected 329 or 337.
  • the event detection sequence initiates when an event is selected at 330 or 338 and the video recorder is started.
  • the thread for monitoring event detection ports on the SIB hardware is shown at 350.
  • the event detection circuits are reset 354 and the event counters are initialized 356.
  • the event detection loop starts 360.
  • the video recorder status is checked 362 and the detection thread is suspended if the video recorder is off 364. If the video recorder is operating, the video frames from the projectile launcher-mounted camera are recorded on the digital video recorder. If Live Processing mode was selected 366, then the video frames are also input into a computer memory buffer configured as a six-second first in, first out (FIFO) memory buffer 368. If an event has occurred 370, then the event detector circuits are reset 372 and event data are processed 374.
  • the burst timer is checked 380 and the burst count is incremented if the burst timeout has passed 382. If not in the burst mode, or after events are processed 372, 374, the event detection loop is restarted 360.
  • the process for analysis of event data is illustrated at 400.
  • the process handles the selection, display, and marking of the emitter locations for event images stored in the FIFO buffer by the video frame grabber.
  • User controls for displaying, marking, and saving marking data are provided 404.
  • An event is selected from the available events 410.
  • the data related to an event is used to find and display the video frame for the event 412.
  • the operator verifies the event occurrence and emitter marking, and the marked event data is saved 414.
  • the marked emitter data is used with the target and emitter coordinate data provided via the survey process to determine aimpoint performance 416.
  • Aimpoint data for the target and event of interest is saved 418 and combined with the ballistics model to provide flyout of the round towards the target.
  • the operator may then select another event to be analyzed, or return to the previous process 420, 422.
  • a projectile dynamics model as is well known in the art is included for both training and testing.
  • the dynamics model calculates the fall of the shot.
  • Such testing may include modifications to the projectile launcher sight or triggering mechanisms.
  • the projectile dynamics model may include aerodynamic drag effects as well as lift and gravity forces upon the projectile. Wind and other shocks encountered by the projectile are also included in the dynamics model.
  • Such modeling is well known in the art, including the aerodynamic effects of lift and drag caused by exogenous aerodynamic forces.
  • burst effects are correlated to the actual projectile parameters, for example, the quantity of explosive, the fracture characteristics of the casing and the effects of proximity devices.
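  • A minimal point-mass flyout sketch in the spirit of this dynamics model, with gravity, a simple speed-proportional drag term, and a constant head/tail wind (the coefficients, step size, and muzzle values are illustrative only, not a validated ballistic model):

        import math

        def flyout(speed_mps, elev_rad, drag_k=0.0008, wind_mps=0.0, dt=0.001):
            """Integrate a point-mass trajectory until ground impact and
            return (downrange_m, time_s) at the fall of shot."""
            g = 9.81
            x = y = t = 0.0
            vx = speed_mps * math.cos(elev_rad)
            vy = speed_mps * math.sin(elev_rad)
            while y >= 0.0:
                rvx = vx - wind_mps          # drag acts on air-relative velocity
                v = math.hypot(rvx, vy)
                vx -= drag_k * v * rvx * dt
                vy -= (g + drag_k * v * vy) * dt
                x += vx * dt
                y += vy * dt
                t += dt
            return x, t

        # e.g. an 850 m/s round with a small super-elevation angle
        print(flyout(850.0, math.radians(0.2)))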
  • a survey of the firing range is performed.
  • the system uses survey equipment to measure the three-dimensional coordinates of the targets, reference emitters and shooting position. Since all of the subsequent calculations are based upon these measurements, the accuracy of these coordinates bounds the accuracy of the system.
  • the scenario objectives, whether testing or training, are factored into the design of the range.
  • the location of the firing line is determined along with the placement of downrange hardware consisting of rangefinders and IR emitters, thereby defining the range and available targets.
  • the static emitters preferably should be placed within 20 mils of the targets but may also be placed further from the target with acceptable loss of accuracy.
  • a static target survey is performed in which range data for the shooter, static targets, and emitter three-dimensional coordinates within the live fire range are determined along with ground plane elevation at each target.
  • Equipment used to survey live fire test ranges typically includes theodolites, transits, laser range finders and global positioning systems.
  • the exemplary embodiment uses a survey that incorporates a theodolite angular measurement device with a laser rangefinder to provide azimuth, elevation, and range to a retro-reflective marker with an option to calculate and output three-dimensional coordinates.
  • measurements are referenced to geodetic coordinates.
  • a relative reference method is used by defining a local coordinate system origin (defined as (0, 0, 0)) that coincides with a predefined shooter position.
  • the target and emitter coordinates are measured relative to the shooter position.
  • the desired results are coordinates reported in Northing, Easting, and Elevation coordinates (n, e, h).
  • Survey equipment, under ideal circumstances, can supply accuracies on the order of ±1 mm.
  • the uncertainty in the target and reference emitter position can increase to ±0.1 meter (for each) due to additional errors that can occur during subsequent target and reference emitter placement.
  • the target and reference emitters will be positioned before the survey.
  • the target and reference emitter coordinates will then be obtained by surveying to their centers.
  • the best achievable uncertainty in their placement is considered to be ±0.01 meters (for each). Therefore, with respect to the total uncertainty error, the survey contributes ±0.01 meters for each of the shooter position, target position and emitter position.
  • Communication and control connections are made between the operation station (comprising the system computer 26 and the SIB 28) and the firing line and downrange hardware. Some connections are made through cables and other connections are made through radio means (including radio relays). After controls are established, the components are set to their specific addresses for communication with the SIB. In particular, the trigger events are established and each camera is associated with each projectile launcher. It is important that the location of each target and IR emitter within the range be precisely determined. With this range data determined, the Input Files are input into the computer.
  • the targets and emitters are located on the firing range at specific northing, easting, and elevation coordinates provided by survey.
  • the accuracy of the target and emitter positions depends on when the survey is performed relative to placing the targets and emitters on the range.
  • One method is to survey and mark the desired positions on the ground, followed by the later placement of the targets and emitters. Adding measured elevation offsets to the ground survey positions gives the final coordinates of the targets and emitters. Using this method, estimated errors in the coordinates range from ±50 mm to ±100 mm.
  • Alternatively, targets and emitters are located by securely positioning the targets on the firing range at the approximate ranges desired, and then surveying directly to the targets and emitters. If surveying is performed after placement, the coordinates of the targets and emitters should fall within ±10 mm or better. Recording measured offsets of the target centroids from the ground allows projectile ground impact in the immediate area of the target to be evaluated.
  • the centroid of each emitter is located by visual interpolation. Marking precision is increased by allowing for sub-pixel marking via a zoom function (implemented in the system software). The zoom function magnifies the area of the image that contains the emitter signature. Sub-pixel marking precision is the inverse of the zoom factor chosen; a zoom factor of 4 provides sub-pixel precision of 0.25 pixels. Marking of the emitter centroid is performed by noting the horizontal and vertical dimensions of the emitter image in pixels. The location of the emitter centroid is determined by dividing these dimensions in half.
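  • A sketch of that centroid-marking arithmetic (names are illustrative; the system software performs the equivalent within its zoom display):

        def mark_centroid(left_px, top_px, width_px, height_px, zoom=4):
            """Halve the emitter image's horizontal and vertical extents to
            locate its centroid; marking precision is the inverse of the
            zoom factor (zoom 4 -> 0.25-pixel precision)."""
            cx = left_px + width_px / 2.0
            cy = top_px + height_px / 2.0
            return (cx, cy), 1.0 / zoom

        centroid, precision_px = mark_centroid(317, 240, 5, 4, zoom=4)
        print(centroid, precision_px)   # (319.5, 242.0) 0.25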
  • the shooter, emitter, and target geometry forms a long, narrow triangle with the shooter at the apex and the emitter and target located downrange at the other triangle vertices. Equations describing the angles from the reference emitter to the target are shown along with the sight line-emitter angles with reference to the CCD camera image.
  • the shooting position will preferably be located beforehand by having the surveyor place a marker at the defined shooting position on the ground.
  • the accuracy of the shooting position coordinates would follow the best-case coordinate accuracies of ±10 mm.
  • the process for finding the sightline to target aimpoint angles is performed.
  • the process consists of four basic steps: 1) using the 3D coordinates of the shooter, emitter, and target, calculate the emitter and target angles (vertical and horizontal) relative to a line parallel to a reference (northing) axis; 2) calculate the emitter-to-target angles by subtracting the emitter angles from the target angles; 3) measure the sightline-to-emitter angles by marking the emitter centroid on the CCD image captured when a shot is fired, correcting for boresight angular errors; 4) using the calculations made from survey coordinates and the CCD measurements for the particular shooter, target, and emitter combination, calculate the sightline-to-target angles by subtracting the emitter-to-target angles from the sightline-to-emitter angles. A sketch of this computation follows.
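  • The four steps reduce to straightforward angle arithmetic; a sketch using (northing, easting, elevation) coordinates in meters, under one self-consistent sign convention (the patent's own conventions may differ) and with placeholder lens-map, boresight, and marking values:

        import math

        def ne_angles(origin, point):
            """Horizontal and vertical angles (radians) from origin to point,
            measured relative to a line parallel to the northing axis."""
            dn, de, dh = (point[i] - origin[i] for i in range(3))
            return math.atan2(de, dn), math.atan2(dh, math.hypot(dn, de))

        def aimpoint_error(shooter, emitter, target,
                           emitter_px, urad_per_px, boresight_urad):
            """Return (H, V) sightline-to-target angles in microradians;
            positive H means the sightline lies to the right of the target."""
            em_h, em_v = ne_angles(shooter, emitter)      # step 1: survey angles
            tg_h, tg_v = ne_angles(shooter, target)
            et_h = (tg_h - em_h) * 1e6                    # step 2: emitter-to-target
            et_v = (tg_v - em_v) * 1e6
            # step 3: marked emitter offset in the image gives the emitter
            # angle relative to the sightline, corrected for boresight
            es_h = emitter_px[0] * urad_per_px[0] - boresight_urad[0]
            es_v = emitter_px[1] * urad_per_px[1] - boresight_urad[1]
            # step 4: sightline-to-target = sightline-to-emitter minus
            # emitter-to-target (with sightline-to-emitter = -es here)
            return -es_h - et_h, -es_v - et_v

        shooter = (0.0, 0.0, 0.0)
        emitter = (300.0, 4.0, 1.5)     # reference emitter offset to one side
        target = (300.0, 0.0, 1.5)
        # emitter marked 56 px right and 23 px above image center (placeholders)
        print(aimpoint_error(shooter, emitter, target,
                             (56.0, 23.0), (218.0, 218.0), (150.0, -75.0)))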
  • the Lens Mapping Procedure will now be described.
  • the lens mapping is preferably performed onsite after setup of the equipment to ensure that the lens mapping accurately reflects the camera and lens settings.
  • the location of an emitter in the camera image is used in finding the angular offset of the emitter from the camera's optical axis, and subsequently the angular offset from the projectile launcher's line of sight.
  • camera and lens combinations are mapped to determine the angle represented by each pixel of the camera's imaging device. Camera/lens mapping comprises locating the camera and lens at a known distance from a target calibrated in millimeters and capturing the resulting image.
  • the captured image (in digital format) is examined to determine the relationship between the pixels and the calibration target markings.
  • the LMP is a function of the system computer software and is used to provide the microRadiansperPixel_H and microRadiansperPixel_V values associated with a particular CCD camera and lens combination that will be mounted to a projectile launcher under test during a scenario.
  • the microradians per pixel across the field of view of the camera/lens combination are calculated from the relationship of the pixels to the linear target markings, and the known distance from the camera lens to the target.
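  • A sketch of that per-pixel calculation (the calibration spacing, distance, and pixel span are invented for illustration):

        import math

        def urad_per_pixel(mark_spacing_m, distance_m, pixels_between_marks):
            """Angle subtended by a known target spacing at a known distance,
            divided by the pixels spanning that spacing in the image."""
            return math.atan2(mark_spacing_m, distance_m) / pixels_between_marks * 1e6

        # e.g. calibration marks 0.5 m apart, 10 m from the lens, spanning 229 px
        print(urad_per_pixel(0.5, 10.0, 229))   # ~218 urad per pixel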
  • a standard camera faceplate format of 1/2″ is typically used, although 1/4″, 1/3″ and 2/3″ formats as well as nonstandard formats are suitable.
  • the particular camera and lens that are suitable for the field of view and range are selected for the aimpoint measurement task.
  • the selection of a specific lens is dependent on both the camera selected (due to the camera image format used and also due to the lens mounting requirements) and on the focal length needed to provide a suitable field of view for the downrange targets.
  • the field of view (FOV) for the 1/3 in. CCTV format is less than the FOV for the 1/2 in. format.
  • the 1/3 in. format provides a higher angular resolution at the expense of a smaller field of view.
  • the field coverage must provide for the camera's imaging of a target's reference emitter when the projectile launcher is sighted on the target. This requirement originates from the need to mark the reference emitter when a shot is fired at a target.
  • each scenario will have its own particular requirements, and each scenario should be analyzed to determine what field coverage is required.
  • field coverage falls between 10 meters and 100 meters although larger fields are within the contemplation of the present invention. Smaller field coverages lead to higher system measurement resolution, but the emitters will need to be closer to the targets. Larger field coverages will reduce the measurement resolution, but will allow larger target to emitter separations and will help to ensure that the emitter will be visible when a shot is fired.
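  • The trade can be made concrete with the 640-pixel digitized horizontal format quoted above (the target range and coverages here are illustrative):

        import math

        H_PIXELS = 640   # digitized horizontal format from the frame grabber

        def resolution_at_target(field_coverage_m, target_range_m):
            """Per-pixel angle, and per-pixel distance at the target plane,
            for a lens whose horizontal field spans field_coverage_m."""
            fov_rad = 2.0 * math.atan2(field_coverage_m / 2.0, target_range_m)
            urad_px = fov_rad / H_PIXELS * 1e6
            return urad_px, urad_px * 1e-6 * target_range_m

        for coverage_m in (10.0, 100.0):
            urad, m_px = resolution_at_target(coverage_m, 500.0)
            print(f"{coverage_m:5.1f} m coverage at 500 m: "
                  f"{urad:6.1f} urad/px, {m_px * 1000:5.1f} mm/px")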
  • After the camera and lens are selected, the camera is mounted to a tripod and placed near the planned shooter position. Three reference emitters are located at a distance from the camera that approximates the range of the planned scenario targets. The three reference emitters should be placed in a straight line, preferably parallel to the horizon. To align the reference emitters, the CCD camera image is referred to and the spacing is arranged to provide for all three emitters to be in the camera's FOV when the center of the camera's FOV is aligned to the left-most emitter.
  • the second emitter should be preferably placed at about 25% of the image width from the image center towards the right side of the image.
  • the third emitter should be placed preferably at about 70% of this same width. This placement of the emitters will allow for the camera to be rotated 90 degrees counterclockwise and the same three emitters to be used when performing the vertical lens mapping.
  • the 3D coordinates of the camera and the three emitters are measured and recorded.
  • the camera can be set to be the origin with the reference axis defined by the line between the camera and the left-most emitter. With the camera mounted on a tripod, the camera is panned and tilted to visually attempt to align the center of the left-most emitter to be in the center of the camera image. It is to be appreciated that all three emitters are operating and are within the camera's horizontal FOV.
  • the camera and emitter 3D coordinates are calculated.
  • the data is entered into a software program that is part of the system software.
  • the camera and emitter 3D coordinates are entered.
  • the camera image is brought up on the computer screen and the process of performing the fine camera alignment and marking the emitters for lens mapping is initiated.
  • the mouse cursor is placed over the emitters and the (pix.x, pix.y) pixel coordinates are viewed on the screen.
  • the cursor is placed over E_o and the pixel coordinates are checked against the center of the CCD image coordinates.
  • the camera is panned and tilted until E_o is at the pixel coordinates for the center of the image.
  • the cursor is placed over the emitter E_2H and it is checked whether the pix.y value for E_2H is the same as for E_o. If the pix.y coordinates are not the same, the camera is rolled about its axis to make E_o and E_2H (and also E_1H) have the same pix.y value.
  • the camera is rotated 90 degrees counterclockwise and aligned visually so that the left-most emitter becomes the center emitter within the CCD image with the remaining emitters being within the camera's vertical FOV.
  • the adjustment of the camera to properly image the three emitters for the vertical lens mapping is similar to the horizontal alignment, except that the pix.x coordinates of all three emitters should be the same value, which should be the pixel coordinate of the horizontal center of the image.
  • the camera is panned, tilted, and rolled to perform the alignment and then mark the emitters in the same order. After the last emitter is marked for the vertical lens mapping, the horizontal and vertical lens mapping values will be displayed and an option for storing the values will be given.
  • Lens map values are stored with a name that corresponds with the camera and lens combination used for later retrieval during the live fire aimpoint scenario.
  • the errors that can occur in the camera/lens mapping are: errors in the camera/lens distance to the target, errors in the target calibration, and errors in the reading of the pixel relationship to the target.
  • standard deviation of the pixel in terms of angle was determined to be less than 0.15 milliradians (mrad). This angle is considered to be the lens mapping uncertainty contributing to total system error.
  • the angular value is constant and does not change with range.
  • the camera is mounted to the projectile launcher, preferably with a rigid mount providing a view of the emitters over the entire projectile launcher super-elevation range.
  • the linear offsets (horizontal and vertical) of the camera aperture from the projectile launcher sight line are measured.
  • the ideal alignment of the camera for measuring emitter angles would have the optical axis of the camera lens coaxial with the sightline of the projectile launcher, and the camera lens located at the surveyed shooter position coordinates.
  • This is not practical for live fire since the camera must be mounted out of the way of the projectile path and any projectile launcher operations.
  • the mounting requirements lead to coordinate offsets and angular deviations of the optical axis of the lens from the projectile launcher sightline, which are corrected by the boresight process.
  • the boresight process provides a set of boresight angles that are used to correct the subsequent measured emitter to sightline angles obtained during the firing event.
  • the boresight angles derived from the boresight emitter measurement include angular deviations between the sightline and the camera optical axis as well as apparent angular deviations due to the coordinate offset from the sightline, assuming that the optical axis to sightline offsets are negligible.
  • in many setups the optical axis to sightline offsets are negligible.
  • however, even small optical axis to sightline offsets will cause apparent angular deviations, and their effect should be considered.
  • the portion of the boresight angles that is due to the sightline to camera offset is mathematically subtracted from the aimpoint calculations. This adjustment is done by calculating the angular deviations due to the sightline to camera offset at the boresight range, and then subtracting those angles from the angular deviations that are due to the camera lens offset at the target emitter range. This difference is then added to the boresight angles to produce the boresight correction angles for emitters at ranges other than the boresight range. As can be appreciated by those skilled in the art, there will still be some residual errors after this boresight correction process due to measurement uncertainty.
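  • a minimal sketch of this range-dependent correction, assuming a simple model where the apparent deviation from a linear offset is atan(offset/range); all names are illustrative:

      import math

      def offset_angle_mrad(offset_m, range_m):
          # Apparent angular deviation caused by a linear camera-to-sightline offset.
          return math.atan(offset_m / range_m) * 1000.0

      def corrected_boresight_mrad(boresight_mrad, offset_m, r_boresight_m, r_target_m):
          # Deviation at the target emitter range minus deviation at the
          # boresight range, added to the measured boresight angle.
          return boresight_mrad + (offset_angle_mrad(offset_m, r_target_m)
                                   - offset_angle_mrad(offset_m, r_boresight_m))

      # A 50 mm offset boresighted at 100 m and used at 300 m shifts the
      # correction by about -0.33 mrad.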
  • When the CCD camera is rigidly mounted to the projectile launcher under test, as is preferred, it must be aligned to the projectile launcher sightline. Some adjustment capability is provided in the mount to adjust the optical axis of the camera lens to be approximately parallel to the projectile launcher sightline. If the optical axis is parallel, there will be a constant linear offset between the sightline and the optical axis, but no angular offset. However, the constant linear offset translates into an apparent angular offset, which diminishes with increasing range. Since the mechanical alignment process is relatively coarse, an electronic alignment, or boresighting, is performed to find the residual error for removal during actual aimpoint error measurements.
  • The boresight calibration is performed by aiming the projectile launcher at the boresight target and emitter pair that is placed at a known range.
  • An expert gunner fires one or more shots (preferably three) at the boresight target and the boresight emitter centroid is marked on the CCD image.
  • the shot group is analyzed to verify proper aiming by the expert gunner relative to the projectile launcher under test. If acceptable, the average of the shot grouping is used as the boresight correction values to account for the remaining angular offset of the optical axis to the sightline.
  • the sight settings remain unchanged after the boresight process is completed.
  • the acceptability of the aimpoint errors is dependent on the projectile launcher under test.
  • an expert M16 gunner using the iron sights can aim at a well-marked target to within 0.5 milliradians (mrad), while sighting errors for an M203 Quad sight may be as high as 4 mrad.
  • adjustable sight projectile launchers include, for example, the OICW, OCSW, MK-19, and M203.
  • adjustable sights raise two additional concerns, arising from the significant changes in the projectile launcher's sightline to barrel angle as the target range changes.
  • the adjustable sight design is used to produce super-elevation of the barrel to fly a projectile to a target. In some cases the elevation angle may be on the order of 36 degrees.
  • the change in the sightline to barrel angle produces a change in the camera optical axis (OA) to sightline (SL) angle and offset.
  • the primary concern is that since the camera is attached to the barrel of the projectile launcher, the camera will rotate with the barrel's super-elevation. If the barrel is super-elevated more than a few degrees, the infrared reference emitters may be out of the camera's field of view (FOV) and the emitter cannot be marked.
  • the loss of the emitter from the camera's FOV can be accommodated by a mount that provides for indexed angular rotation of the camera to counteract the super-elevation of the barrel.
  • a selection of indexed camera rotation angles can be used to keep the emitter within the FOV for a group of sight angle settings. The number of camera rotation settings depends on the range of the sight angles versus the FOV.
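  • a back-of-the-envelope sizing of the number of indexed settings, assuming each setting must keep the emitter inside the vertical FOV with some margin (illustrative only):

      import math

      def indexed_rotation_settings(max_superelevation_deg, vertical_fov_deg, margin_deg=1.0):
          # Usable angular span per indexed camera position.
          usable = vertical_fov_deg - 2 * margin_deg
          return math.ceil(max_superelevation_deg / usable)

      # e.g. 36 degrees of super-elevation with an 8 degree vertical FOV and
      # 1 degree margins suggests 6 indexed camera rotation settings.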
  • emitters can be mounted at higher elevations on the range so that they are in the FOV when the projectile launcher is super-elevated.
  • a separate boresight for each indexed sight setting could be performed.
  • each setting of the sight along with any required rotation of the camera, would act as an individual fixed sight projectile launcher.
  • a family of boresight and OA to SL offsets would need to be saved for each sight setting that is planned for use in an experiment or exercise.
  • the shooter would have to indicate the current indexed sight setting for the firing event.
  • the software would then use the boresight and OA to SL offsets for that sight setting via either a mathematical sightline model or a sight setting lookup table.
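  • a sketch of the lookup table approach, with hypothetical sight settings and correction values shown purely for illustration:

      # Per-sight-setting boresight angles and OA to SL offsets, keyed by the
      # indexed sight setting the shooter reports (all values hypothetical).
      SIGHT_TABLE = {
          100: {'boresight_mrad': (0.21, -0.35), 'oa_sl_offset_m': (0.02, 0.05)},
          200: {'boresight_mrad': (0.18, -1.10), 'oa_sl_offset_m': (0.02, 0.05)},
          400: {'boresight_mrad': (0.25, -3.60), 'oa_sl_offset_m': (0.02, 0.07)},
      }

      def corrections_for_setting(setting_m):
          # Raises KeyError if the reported setting was never boresighted.
          return SIGHT_TABLE[setting_m]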
  • using either of these two methods for adjustable sights in the software will not result in additional uncertainties in aimpoint measurements.
  • the goal of the moving target measurement system is to provide the 3D coordinates of the moving target centroid at the time that a shot is fired (to determine the projectile launcher sightline angles relative to the target center) and at the time of the terminal position of the fired round (where it intersects the target path plane, hits the ground, or air bursts), so that the locations of the moving target and the round can be differentiated at the time the round reaches its terminus or intersects the target.
  • the location of the moving targets' centroid, like the fixed targets' centroid, is determined with reference to the infrared emitters.
  • Moving targets, along with associated emitters and laser rangefinders are established during setup.
  • the laser rangefinders are aligned and the downrange controllers are set up.
  • the laser rangefinder is zeroed at the moving target home or initial position. Then, the moving target and emitter 3D coordinates at the home (or initial) position and the end positions are obtained.
  • Referring to FIGS. 11 and 12, which depict the target path plane and identify the target angles from the shooter's position, the sightline offset angles from the target, and the subsequent sightline angles, the method of calculating the sightline intersection with the target plane will now be explained.
  • the 3D coordinates of the moving target and emitter at the time a shot is fired are found by measuring the target sled offset from the target sled home position via a rangefinder. This offset is used to produce a 3D offset relative to surveyed 3D coordinates of the target and emitter at the home position.
  • the 3D offsets are added to the home position 3D coordinates to produce the target and emitter 3D coordinates within the test range coordinate system at the time of an event.
  • target and emitter 3D coordinates at shot fired are used, along with the CCD image pixel coordinates of the emitter, to produce the sightline offset angles relative to the target center. Once the target and emitter 3D coordinates within the test range are found, the calculation of the sightline offset angles follows the same procedure that is used for static targets described hereinabove.
  • the sightline offset angle is added to the horizontal and vertical angles of the target within the firing range coordinate system (derived from the target's surveyed 3D coordinates) in order to determine the horizontal and vertical angles of the sightline in the firing range coordinate system. After sightline orientation is found, the intersection of the sightline with the target path plane is calculated.
  • the 3D coordinate of the fired round on the target path plane can be found by subtracting the drop from the sightline intersection with the target path plane.
  • the moving target setup will now be described.
  • the target sled moves along a linear track under remote control.
  • motion is provided by an electric motor powered by 12 VDC batteries although pneumatic, hydraulic or other mechanical drivers that are well known in the art may provide motion.
  • the target sled can move in either direction along the track and the target can be popped up or down.
  • Remote control may be independent of the system computer.
  • the target track while not necessarily level, is assumed to be flat and straight such that it does not cause up and down or left and right deviations of the target from a straight line of more than 20 mm. While an assumption of 20 mm may be optimistic, this assumption allows the target path to be considered as following a straight line.
  • the beginning and ending target sled positions (also referred to as “home” and “end”) are electronically referenced to ensure that these positions are known and repeatable. Sensors are used on the track to detect the home and end reference positions of the target sled.
  • the target and emitter centroid positions are measured in 3D coordinates (via survey to ±10 mm) for the home and end positions of the target sled. These 3D target and emitter centroid home and end positions are defined as T_B and T_E, and E_B and E_E, respectively.
  • the moving target setup procedure requires that the rangefinder SetRangefinderZero( ) command be sent at the time of surveying the target's home position 3D coordinates. Subsequent rangefinder readings of the target sled position will provide the relative target sled offset from the target home position, T_B.
  • the 3D coordinates of T_B and T_E are used to produce a 3D unit vector, t_BtE, which is multiplied by the linear target offset from the home position along the track to produce the 3D target offset, T_O. Summing this offset with the 3D coordinates of the target home position, T_B, provides the 3D coordinates of the intermediate target position.
  • a system emitter is physically attached to the moving target sled, and the position of the emitter centroid relative to the target centroid is fixed.
  • the 3D coordinates of the emitter centroid at an intermediate target position are calculated by summing T_O with the 3D coordinates of the emitter at the home position, E_B (since the target and emitter are assumed to follow parallel paths).
  • the 3D coordinates of the target and emitter centroid at the intermediate target sled position are used to find the shooter's aimpoint angles (θ_x, θ_y) relative to the target centroid.
  • the aimpoint angles produced will be for the aimpoint at the point in time that corresponds to the video frame used to mark the emitter position.
  • target linear offsets are captured continuously during target movement and referenced to an event start time. These offsets and time references are stored during the event and transferred to the system computer after the completion of the event.
  • the point in time corresponding to the midpoint of the video frame selected for marking the emitter position is designated t@MarkedFrame.
  • similarly, t@ShotFired is used to retrieve the target linear offset for the moment when the shot was fired.
  • the linear offset, d_offset, of the target sled from the home position for a particular time is provided via an optical rangefinder that bounces an infrared beam off a retroreflector attached to the moving target sled.
  • the offset is provided to an accuracy of ±10 mm. This is equal to the measurement accuracy specification of the rangefinder for an average of ten 2000 Hz measurements providing a 200 Hz sample output (5 millisecond sample period).
  • the linear target sled offset, d_offset, is multiplied by the track unit vector, t_BtE, to arrive at the 3D coordinate offset of the target sled, T_O, relative to the home position of the target sled.
  • T_3D = T_B + T_O
  • E_3D = T_O + E_B
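  • these two sums can be sketched directly in code, assuming the survey coordinates and the rangefinder offset are available (names are illustrative):

      import numpy as np

      def track_unit_vector(t_home, t_end):
          # t_BtE: unit vector from the home position toward the end position.
          v = np.asarray(t_end, float) - np.asarray(t_home, float)
          return v / np.linalg.norm(v)

      def event_coordinates(t_home, e_home, t_unit, d_offset):
          # T_O = d_offset * t_BtE; then T_3D = T_B + T_O and E_3D = T_O + E_B,
          # all in the firing range (n, e, h) coordinate system.
          t_o = d_offset * t_unit
          return np.asarray(t_home, float) + t_o, np.asarray(e_home, float) + t_o

      # d_offset is retrieved from the rangefinder log at t@MarkedFrame or
      # t@ShotFired.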
  • the location of the emitter in the CCD camera image will be used to determine the aimpoint angles (θ_x, θ_y) of the sightline relative to the moving target centroid at the t@MarkedFrame time. These aimpoint angles for the moving target will be calculated using the same methods as for static targets, except that the target and emitter 3D coordinates, T_MF and E_MF, will be provided via calculations that use the rangefinder and survey data.
  • the 3D coordinates of the target T_SF will be found by retrieving the target linear offset from the rangefinder at the t@ShotFired time.
  • the aimpoint angles (θ_x, θ_y) will be summed with the angles to T_SF (θ_e, θ_h) to produce the sightline angles, (E_e, E_h), of the sightline vector, p_sl, within the firing range relative to the surveyed shooter position P.
  • a sightline unit vector is created by using the sightline angles (E_e, E_h) to find the direction of the sightline within the firing range.
  • the sightline unit vector is p_sl = (n_psl, e_psl, h_psl).
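  • one common convention for forming that unit vector, assuming E_e is the horizontal angle measured from the n-axis and E_h the elevation angle (the patent does not spell out the convention), is:

      import math

      def sightline_unit_vector(e_e_rad, e_h_rad):
          # Direction cosines of the sightline in (n, e, h) coordinates.
          return (math.cos(e_h_rad) * math.cos(e_e_rad),   # n_psl
                  math.cos(e_h_rad) * math.sin(e_e_rad),   # e_psl
                  math.sin(e_h_rad))                       # h_psl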
  • RI: round intercept
  • KE: kinetic energy
  • HE: high explosive
  • the RI occurs when the round either passes through the target path plane or the round impacts the ground short of the target.
  • the RI occurs at whichever occurs first: the round impacts the target, the round impacts the ground, or, if fused, the fuse time expires.
  • the sightline angles and the sightline intersection with the target path plane must be known in order to define the path of the round. If the round impacts the ground, or airbursts, the distance from the shooter must also be known to calculate the 3D coordinate.
  • the time-of-flight of the round at round intercept, TOF_RI, must be known to determine the target sled offset and to find the target 3D coordinates, T_RI, at the time of RI.
  • the ground range to the target path plane in the n-e plane of the firing range and the sightline elevation angle determines the time-of-flight of the round unless the round impacts the ground before the target path plane.
  • the sightline vector intersection, SL_I, to be described hereinbelow, with the plane of the moving target path is calculated.
  • the 3D coordinates of the shooter, P, and the sightline intersection, SL_I, are used to find the slant range to the target plane, r_TP.
  • the r_RI range is the corresponding ground range to the target path plane parallel to the n-e plane of the firing range.
  • the range r_RI is used to set the maximum range for the ballistics model to fly the round towards the target path plane.
  • the ballistics model provides the time-of-flight to round impact, TOF_RI, for the KE round in use.
  • TOF_RI = TOF_TP (time of flight to the target path plane) when the round reaches the target path plane.
  • TOF_RI = TOF_GI (time of flight to ground impact) when the round impacts the ground short of the plane.
  • R_GI is defined as the 3D coordinate where the ballistics calculation indicates that the round drops to the plane of the ground.
  • the time-of-flight of the round stops at the point of ground impact and the time is labeled TOF_GI.
  • in that case, TOF_RI = TOF_GI.
  • the TOF_RI will be used to determine the moving target centroid coordinates, T_RI, at the time that the round arrives downrange.
  • a target sled offset, d_RI, is obtained via the rangefinder at the shot fired time plus TOF_RI, and the process for finding the 3D coordinate of the target centroid within the firing range is repeated to produce the target position at the round intercept time, T_RI (see the sketch below).
  • T_RI: the target centroid coordinates at the round intercept time.
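  • the KE round-intercept bookkeeping can be sketched as follows; ballistics_fly and offset_at_time stand in for the ballistics model and the stored 200 Hz rangefinder log, both hypothetical interfaces:

      def ke_round_intercept(ballistics_fly, r_ri, t_shot_fired, offset_at_time):
          # Fly the round out no farther than the ground range to the target
          # path plane; the model reports time of flight and whether the
          # round hit the ground short of the plane.
          tof_ri, hit_ground = ballistics_fly(max_range=r_ri)
          # TOF_RI = TOF_TP if the plane was reached, else TOF_RI = TOF_GI.
          d_ri = offset_at_time(t_shot_fired + tof_ri)  # target sled offset at RI
          return tof_ri, hit_ground, d_ri               # d_ri feeds the T_RI calculation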
  • Airburst HE rounds are fused for a particular range and the time-of-flight for an air burst is dependent on the fuse range and the speed of the round. Since an HE round is still considered active after it passes the target, the intersection of the round with the target path plane is only important if the round actually hits the target (in which case the round is assumed to have exploded). If the round passes through the target path plane and then air bursts or impacts the ground, a hit or miss calculation can be performed using the target and round coordinates to determine if the target was hit.
  • the air burst or ground impact coordinates are calculated and can be used to find the distance from R_RI to the target at T_RI.
  • the 3D coordinate R_GI is also needed to determine the distance from R_RI to the target at T_RI.
  • the first step is to fly the round out using the ballistics model and the aimpoint angles derived from the emitter marking at shot fired. If the round is fused, then the round is only flown out to the fuse range. Then a check is made to determine if the round air burst or hit the ground.
  • the round's terminal range, for either air burst or ground impact, is then found.
  • it is then determined whether the round's terminal range is before or beyond the target path plane. If the terminal range is less than the range to the target path plane, the round has either hit the ground or airburst in front of the target path plane, and no target impact is possible. If the terminal range is greater than or equal to the range to the target path plane, then the round has either hit the target, hit the ground, or airburst. Since the determination of an air burst or ground impact has already been performed, the only possibility that needs to be checked is whether a target impact has occurred. The result is that the sightline intercept with the target path plane must be calculated to determine the TOF_TP to the target path plane. This is the same calculation described hereinabove for the KE rounds.
  • the 3D position of the round at TOF_TP, R_TP, can be compared with the 3D position of the target at TOF_TP, T_TP.
  • this comparison can be used to determine if a hit or miss occurred, where a hit would occur if the distance between R_TP and T_TP falls within the extents of the target.
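  • a sketch of that comparison, treating the target extent as a simple bounding box (an assumption for illustration; the patent only requires a defined target extent):

      import numpy as np

      def hit_or_miss(r_tp, t_tp, half_width_m, half_height_m):
          # Horizontal (n, e) separation and elevation separation between the
          # round and the target centroid in the target path plane at TOF_TP.
          horiz = np.linalg.norm(np.asarray(r_tp[:2], float) - np.asarray(t_tp[:2], float))
          vert = abs(r_tp[2] - t_tp[2])
          return horiz <= half_width_m and vert <= half_height_m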
  • the sightline vector is defined by the shooter position P and the sightline angles (E_e, E_h) into the firing range ( FIG. 11 ).
  • the target path plane contains the target home and end positions, T_B and T_E, and is specified to be a vertically oriented plane parallel to the h-axis.
  • the first step in defining the target path plane is to add an additional point to the beginning and ending target coordinates, T_B and T_E, to make up the required third point for a plane.
  • T_B, T_E, and T′_B define a plane that includes the path of the target along the track and is vertically oriented to be parallel to the elevation axis.
  • the A, B, and C coefficients are the components of a vector that is normal to the target path plane, TPP_N.
  • the coefficient D is calculated by taking the negative determinant of the three co-planar points, T_B, T_E, and T′_B, and setting the result equal to D.
  • This unit normal vector, TPP_N, and the distance d will be used in finding the intersection of the sightline vector with the target path plane.
  • the sightline vector is defined by a starting point, the shooter position P, and a set of easting and elevation angles (E_e, E_h), providing the projection direction into the firing range coordinate system from P.
  • the sightline unit vector is p_sl = (n_psl, e_psl, h_psl), and r_TP is the unknown distance from the shooter P to the sightline vector intersection with the target path plane at SL_I.
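  • solving for r_TP is a standard ray-plane intersection; a sketch, assuming the plane is given as A·n + B·e + C·h + D = 0 with normal TPP_N = (A, B, C):

      import numpy as np

      def sightline_plane_intersection(p, u, tpp_n, d_coeff):
          # Solve N.(P + r*u) + D = 0 for r, i.e. r_TP = -(N.P + D) / (N.u).
          p, u, n = (np.asarray(x, float) for x in (p, u, tpp_n))
          denom = n.dot(u)
          if abs(denom) < 1e-12:
              return None, None          # sightline parallel to the target path plane
          r_tp = -(n.dot(p) + d_coeff) / denom
          return r_tp, p + r_tp * u      # slant range r_TP and the SL_I coordinates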
  • a shooter will align the projectile launcher sight with a target.
  • the center of the lens image at the trigger event determines the reference for aimpoint.
  • the infrared emitters illuminate at the trigger event to provide offset measurements from the aimpoint reference to the fixed IR emitter position or the moving target position as determined by the downrange controller.
  • the aimpoint and projectile launcher effects are calculated, preferably using the software resident in the system computer. Output from the sensors and the camera is integrated into the VCR with titling for event analysis.
  • the information collected, including references and calculated aimpoint and projectile dynamics, provides information for training and projectile launcher testing.
  • Errors between the actual and predicted aimpoint result from uncertainties associated with the shooter, target, and emitter positions.
  • the errors are range dependent and are minor contributors to the overall error.
  • error associated with the offset of the CCD camera from the projectile launcher sightline, lens mapping, and the emitter marking process can also produce minor errors, and these errors can be minimized through techniques to be described hereinafter.
  • the uncertainty associated with the boresight as performed by an expert gunner is the limiting factor in the cumulative error of the system.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Aiming, Guidance, Guns With A Light Source, Armor, Camouflage, And Targets (AREA)

Abstract

An exemplary embodiment of the invention relates to determining aimpoint relative to a target located within a firing range. The target can be either fixed or moving, and if the target is moving then it can be mounted on tracks or controlled by servomechanisms. In addition to aimpoint, predicted projectile effects and personnel performance can be calculated for evaluation and for training. The firing range is mapped so that placement of infrared emitters is associated with specific targets. Placement of the emitter external to the target perimeter produces accurate results so long as the emitter is within the field of view of a camera mounted upon the projectile launcher. The invention permits large firing ranges with the ability to place targets wherever desired without decreasing aimpoint measurement accuracy.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • This invention relates generally to projectile targeting and more particularly to small projectile trajectory analysis.
  • 2. Description of the Related Art
  • Determination of aimpoint is well known. Aimpoint is used in actual field situations to determine a fire control solution. Aimpoint is also used in training weapons operators to sharpen their skills and to improve their performance. U.S. Pat. Nos. 4,804,325 and 5,213,503, each of which is incorporated herein by reference, illustrate representative relevant techniques in the prior art.
  • U.S. Pat. No. 4,804,325 discloses a weapons training simulator that predicts aimpoint and trajectory. The invention uses a sensor mounted on a weapon to generate a target position signal based on point sources located within the perimeter of simulated targets, thereby defining a diffusely illuminated target field. Set in a real-world environment, the aimpoint is determined using the output from light emitting diodes (LEDs) that define point sources imaged onto a quadrature detector array to create uniform diffuse sources.
  • U.S. Pat. No. 5,213,503 discloses a commercially available aimpoint infrared spot tracking system that includes a charge coupled device (CCD) video camera interfaced to a digital frame grabber operating at standard video rates for use in simulator training. A lens system images the tracking area (i.e., video projection screen) onto the CCD imaging sensor. The frame grabber digitizes each frame of video data collected by the CCD camera. This data is further processed with digital signal processing hardware as well as software algorithms to find position coordinates of the imaged IR spot. The '503 patent uses tracking system software to allow the aimpoint to be continuously monitored during a training scenario. For this application a CCD-based tracking system or similar device utilizing a two-dimensional position sensing detector (PSD) lateral-effect photodiode provides the aimpoint position data. The aimpoint analysis of the '503 patent is limited to a virtual environment with all targets displayed upon a video projection screen. In particular, an infrared source is mounted to the weapon and the beam from the infrared source is projected onto the screen.
  • Live testing is generally performed to determine the path of projectiles by physically tracking the projectile path (hereafter the “fall of shot”) from the muzzle exit to the actual end-point of the projectile path, which could be the location at detonation. In particular, aimpoint analysis is the focus of testing when experimenting with new weapon sights or fire control systems. As used in this specification, weapon and projectile launcher are used interchangeably.
  • There are several disadvantages to live testing including technical obstacles to tracking the projectile. For example, ballistic radar is the technology most commonly used to track the fall of shot, but radar cannot track sub-sonic rounds. In addition, measuring the fall of shot by any method cannot separate system error from user error, round-to-round dispersion or environmental error. Furthermore, live testing typically requires the expenditure of large numbers of projectiles: this can become expensive especially with prototypes. Finally, measuring the fall of shot captures only a snapshot of the result of the projectile's flight.
  • In addition to live fire testing, small arms weapons simulators are used extensively for training. During training, it is important for a simulator to replicate the environment that a shooter could encounter. In the real world, targets may be either stationary or moving. In training operators, determining the time required to acquire the target, engage the target, and manipulate a fire control system, as well as other projectile launcher-handling data, is useful. In training, it is also important to minimize the expenditure of ammunition while maximizing the training benefits. Especially important is determining the accuracy of the shooter so that the shooter can improve his or her skill. A need exists to measure a shooter's aimpoint and to predict the impact of the projectile from the projectile launcher aimpoint.
  • Therefore, there is a need for determining aimpoint that allows the small arms testing community to separate the aimpoint of a projectile launcher from the actual fall of shot in physical testing environments. In addition, there is a further need to determine aimpoint for training weapons operators.
  • BRIEF SUMMARY OF THE INVENTION
  • It is an object of the invention to track aimpoint.
  • It is another object of the invention to calculate the predicted trajectory rather than the actual trajectory of the projectile.
  • It is a further object of the invention to compute the difference between the actual aimpoint and the aimpoint required to hit the target: user error.
  • It is still another object of the invention to be able to modify the calculated projectile path to incorporate system error, dispersion, and external effects to determine the probability of a hit or contact (P(h)).
  • It is yet another object of the invention to capture the aimpoint at the moment of dry-fire.
  • It is an additional object of the invention to predict the effects of the projectile in the impact zone.
  • It is a final object of the invention to collect digitized video of the projectile launcher's aimpoint during the aiming procedure to evaluate the shooter's aiming performance.
  • In order to accomplish the above objects, in accordance with a first aspect of the present invention there is provided an aimpoint tracking and data collection system for projectiles during live fire testing or training events. The exemplary embodiment uses infrared reference emitters and a projectile launcher-mounted camera system to measure aimpoint relative to predefined targets on a live testing or training range. Calculated aimpoint angles may be used in conjunction with a ballistics model to predict a projectile's miss distance at the plane of the intended target. In addition, if the projectile is from a weapon then the effects may be incorporated to predict the impact.
  • The system according to the first aspect may further include moving targets.
  • The system according to the first aspect may further include a computer program for performing coordination and control of the equipment and calculations. The calculation may further include the effects of external conditions such as wind on the trajectory.
  • In a second aspect of the present invention a method for determining projectile launcher aimpoint is disclosed. The method comprises surveying a firing range for determining the location of target and emitter coordinates, calculating the centroid of a target having a defined perimeter, the target being placed on the firing range, placing fixed infrared emitting sensors on the firing range wherein the sensors are located external to the target perimeter, selecting the location of the shooter at a firing line, mounting a camera to the projectile launcher, calibrating the optical axis of the camera with the boresight of the projectile launcher, mapping the camera with the target and emitter position, providing equipment means for controlling the emitter and for determining the predicted aimpoint at a triggering event, determining the projectile launcher aimpoint at the triggering event, and outputting the measured projectile launcher aimpoint at the triggering event.
  • These and other features and advantages of the present invention may be better understood by considering the following detailed description of certain preferred embodiments. In the course of this description, reference will frequently be made to the attached drawings.
  • BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING
  • Referring now to the drawing wherein like elements are numbered alike in the several FIGURES:
  • FIG. 1 is a perspective view of an exemplary embodiment illustrating the location of components of the present invention;
  • FIG. 2 is a signal flowchart of an exemplary embodiment of the present invention;
  • FIG. 3 is a signal flowchart for the emitter controller of the present invention;
  • FIG. 4 is a signal flowchart for the downrange controller of the present invention;
  • FIG. 5 is a signal flowchart for the system interface box of the present invention; and
  • FIG. 6 is a flow diagram of the software for controlling the illustrative embodiment of the present invention;
  • FIG. 7 is a software program flowchart of the System Interface Box (SIB);
  • FIG. 8 is a software program flowchart of the Emitter Controller (EC);
  • FIGS. 9a, 9b, and 9c, sheets 1, 2, and 3 respectively, are a flow diagram of the software for controlling the illustrative embodiment of the present invention;
  • FIG. 10 is a plan view illustrating a firing range showing the mathematical relationships for fixed targets, emitters and a projectile launcher, and includes the boresighting relationship between the projectile launcher aimpoint and the camera optical axis;
  • FIG. 11 is a plan view of a firing range showing the sightline and the sightline intersection with the moving target path plane indicating the mathematical relationship between the sightline and the moving target at shot fired time.
  • FIG. 12 is a plan view illustrating a track mounted moving target with attached emitter illustrating the target, shot, and round trajectories, along with equations for calculating the coordinates at the time of the round intercept with the target path plane.
  • DETAILED DESCRIPTION OF THE INVENTION
  • In the following detailed description of the preferred embodiment, reference is made to the accompanying drawings in which is shown by way of illustration a specific embodiment whereby the invention may be practiced. It is to be understood that other embodiments may be utilized and changes may be made without departing from the scope of the present invention.
  • Referring to FIG. 1 and FIG. 2, an exemplary embodiment of a projectile launcher aimpoint tracking system is shown generally at 10. A camera 12 is affixed to a mount 14, and the mount is affixed to a projectile launcher such as a rifle 16. Stationary targets 18 and moving targets 20, with associated stationary and moving infrared emitters 24, respectively, along with emitter controllers 22, are located in a firing range. A firing range is a volume where targets are located and can include an entire battle theatre. Infrared emitters 24, typically LEDs or optically filtered halogen lamps, are located downrange as references for the determination of target location. The associated emitter controller 22 controls each infrared emitter and receives signals from the system interface box 28 or a downrange controller 25, which communicates with the emitter controller by radio transmission. A computer 26 provides the overall control of the system with interfaces provided through a system interface box 28, each being mounted in a rack 30. Radio transmitters 32, relays 34 and receivers 36 along with connecting cables 38 provide communication connections between system components. Sensors that detect projectile launcher related events, for example but not limited to firing, lasing, etc., include one or more of a microphone 40 a, a powered sensor 40 b, or a normally open switch 40 c to provide input to the system interface box 28. In addition, a video recorder 42, for example a videocassette recorder, digital recorder, digital compact disk recorder, etc., receives and stores the camera video and event information, including information from the trigger sensors.
  • The projectile launcher mounted camera 12 views the live-fire range from the perspective of the projectile launcher bore sight. Both the camera and its lens 44 are removable and may be replaced with a different camera and/or a different lens for each test depending upon the test objectives. Typical variables for selecting the camera and lens are the camera size, resolution, required field-of-view, and mounting requirements. For example, small-bore projectile launchers require a small, light camera to minimize interference with operator control and a narrow field-of-view to maximize angular resolution per video pixel. As a further example, a wider field-of-view is required if the targets are dispersed over a wide firing range, such as a battle theatre (for example 30 degrees), as opposed to a narrow target range (for example 8 degrees).
  • In the exemplary embodiment, the camera 12 is an ELMO CCD (charged coupled device) Black and White Camera, model ME4111R that does not have the typical near-infrared blocking filter installed (although other suitable cameras are commercially available, for example, a small rectangular ½″ format camera with an electronic high speed shutter manufactured by Panasonic that can have its near-infrared blocking filter removed). The image pick-up device is a ½ inch interline-transfer CCD with 768 H×494 V effective picture elements (pixels). The camera typically consists of a camera head with lens, a camera control unit (CCU) and an interconnecting cable. The CCU is powered by 12 VDC from either a battery or an AC adapter. The camera is operated in the standard NTSC mode with a vertical frequency (field) of 59.94 Hz, horizontal frequency (line) of 15.734 kHz and provides a composite video output.
  • The Elmo camera uses an Elmo lens mounting and two lenses, having focal lengths of 24 mm and 36 mm. The Panasonic camera uses standard C mount lenses and a single 35 mm lens along with two focal length adapters to provide 1.5× and/or 2× focal length products. Using combinations of the adapters, the 35 mm lens can be modified to provide 52.5, 70, and 105 mm focal lengths.
  • The camera mount 14 is adjustable, permitting the camera lens optical axis to be aligned roughly with the projectile launcher bore sight line. A software managed boresight process is used for fine aligning the angular offset of the camera optical axis from the projectile launcher boresight line. After the camera is roughly aligned, the remaining angular and linear camera axis offset from the sightline is recorded and used in subsequent projectile launcher aimpoint error calculations. In particular, when viewing targets at any distance, the angular and linear offsets of the camera optical axis from the sightline are corrected via the software calculations to remove effects upon the aimpoint measurement accuracy, thereby eliminating the need for optical corrective algorithms.
  • It is to be appreciated that in the illustrated embodiment the field coverage must provide for the camera 12 imaging of a target's reference emitter 24 when the projectile launcher 16 is sighted on a target 18, 20. This requirement originates from the need to mark the reference emitter when a projectile is fired at the target. From this field of coverage it is to be further appreciated that the field coverage is optimally between 10 meters and 100 meters at the target distance thereby permitting location of the emitter outside of the perimeter of the target so long as the emitter is within the field of camera coverage.
  • A titler 46, located in the operator station main rack 30, overlays text onto the incoming camera video and transmits the result to the VCR 42. The titler is responsive to the system computer 26 through an RS232 interface and provides output signals to the system computer through the RS232 interface and to the VCR through an RS170 video interface. The system computer provides commands to the titler to overlay test data onto the camera video for correlation to events of interest occurring during the test.
  • Referring to FIG. 2 and FIG. 5, the digital video recorder 42 is a typical, commercially available cassette unit, but may alternately be a digital recorder that utilizes other media such as digital disks or internal hard disks. In the exemplary embodiment, the digital recorder is mounted in the operator station main rack 30. During testing, the recorder records the video from the camera 12 and the titler 46, as well as the main and auxiliary trigger events from the system interface box 28 on the left and right audio analog channels. This input is first transmitted to a tone generator 50 for the trigger and then to the videocassette recorder 42. The near-infrared CCD image of the live fire range is also output from the video recorder 42 via an RS170 video interface to the system computer's frame grabber board for marking the emitter centroid relative to the projectile launcher sightline. In this manner, the marking of the emitter can be performed immediately after an event. Alternately, the marking of the emitter may be performed during post-test processing, where the videocassette recorder provides recorded event data and test video to the system computer and the computer's frame grabber board to be described hereinbelow. Microphone sensor 40 a inputs representing the trigger events recorded on the left and right audio channels are received by the VCR. The sensor signals 40 a, 40 b, 40 c are also outputted to the system interface box 28 main and auxiliary audio inputs for trigger event re-generation.
  • The frame grabber (not shown) is a commercially available Matrox Orion board that decodes the composite video from the CCD camera 12, described hereinabove, and digitizes the image to a 640H by 480V pixel format. All measurements of the emitter 24 centroid are made on video frames digitized by the frame grabber. The 10-nanosecond jitter of the Matrox Orion board, which equates to less than ten percent of a pixel width, has minimal effect on the emitter centroid measurement and the resulting system error.
  • As is well known in the art, the targets 18, 20 are those typically used for live fire ranges. The targets previously used have been E-silhouettes. Other targets may be used; the only consideration is that there is a defined target centroid. The targets are attached to supports that hold the targets in position.
  • Referring again to FIG. 1, both fixed targets 18 and moving targets 20 are illustrated in the exemplary embodiment. Each moving target is mounted on a track 52 and responds to signals from the system computer 26 to traverse a portion of the firing range through a preprogrammed path when commanded by the computer. Alternatively, a dedicated preprogrammed target motion controller (not shown) can control each target using servomotors. In yet another variation the target motion controller is able to respond to contemporaneous operator commands using servomotors. Mounting the moving targets on tracks simplifies the calculation of location within the firing range by the downrange controller (to be described hereinafter). In alternative embodiments, the location of the moving target is transmitted through alternate means, for example a multiplexed optical signal from a rangefinder or a global positioning system.
  • The exemplary embodiment includes the implementation of a moving target 20 measurement subsystem. The goal of the moving target measurement subsystem is to provide the 3D coordinates of the moving target and emitter centroids at the time that a shot is fired (to determine the projectile launcher sightline angles relative to the target center) and at the time of the terminal position of the fired projectile (where it intersects the target path plane, hits the ground, or air bursts) to differentiate the location of the moving target and the projectile at the time the projectile reaches its terminus or intersects the target. The location of the moving targets' centroid, like the fixed targets' centroid, is determined with reference to the infrared emitters' centroid.
  • The infrared emitters 24 (IR's) are located on the firing range and are used in conjunction with the projectile launcher-mounted CCD camera 12 to determine where a projectile launcher 16 is aimed relative to targets 18, 20 located on the range. In the exemplary embodiment, the emitters for fixed targets 18 are mounted to upright posts 54 that have been driven into the ground to rigidly fix the emitter position. Preferably, the ability to support live fire operations is provided by placing the emitters to one side of the target to minimize the possibility that an emitter will be struck by a projectile. The emitters are located on the range such that the camera 12 can sense the emitter during the projectile launcher aiming process. The camera does not need to sense the target so long as the emitter associated with the target under fire is in the camera's field of view and is identified as the target reference emitter in the system computer program software. The system computer 26 defines the target's reference emitter 24 via a target-emitter database.
  • The emitters 24 are halogen light sources that are optically filtered using an RG780, or generally equivalent, filter to block visible output while allowing near-infrared output to pass through. “Near infrared” is commonly defined as the region of the electromagnetic spectrum wavelength between 0.77 to 1.4 microns. The RG 780 Filter makes the emitter output invisible to a shooter, but provides near infrared for the CCD camera 12 to detect. The CCD camera is also optically filtered, preferably using an RT830 filter (or general equivalent) to optimize detection of near-infrared output. The filter blocks most of the visible wavelengths but passes near-infrared radiation that corresponds to the emitter output and is within the CCD's near-infrared response. The diameter of the infrared emitter 24 aperture is typically 50 millimeters, with smaller or larger sizes used for nearer or farther emitter positions.
  • The emitters used in the illustrated embodiment have angular output profiles that range from a 10-degree symmetrical cone to a 45-degree horizontal by 10-degree vertical compressed cone shape. Considering only the physical size of the emitter aperture, the calculated angular size of the emitter presented to the CCD camera 12 varies as a function of distance from the camera. The emitter subtends a relatively small angle at the 50-meter range and the angle reduces inversely as the range increases. In the exemplary embodiment, 50 meters is the closest planned range to an emitter, while 1000 meters is the maximum range to an emitter, although closer and further ranges are within the contemplation of the illustrated embodiment. In general, an emitter placed at a further range will present itself as a smaller object in the CCD camera image and would allow a more precise location to be derived from the emitter marking/location process. If an emitter subtends less than a pixel when imaged on the CCD camera, marking the emitter centroid will be difficult due to an inability to determine the emitter shape. However, the image of the emitter sensed by the CCD sensor is actually larger, due to a phenomenon known as blooming. Blooming will become important as the angular size of an emitter aperture falls below the angle covered by a pixel.
  • Because of blooming, the physical size of the emitter 24 aperture does not provide a direct relationship to angular size in the CCD camera 12 image. In the exemplary embodiment, high output emitters are used to provide sufficient near-infrared output to overcome ambient radiation from the sun. This high output increases blooming. During actual live fire testing, captured images of the emitter at various ranges show larger angular sizes depending on the ambient illumination level and the emitter range.
  • IR emitters 24 are commonly available along with power sources. A portable 12V power source may be used, for example a battery (not shown). The portable battery power source combined with the radio controlled emitter controllers, described hereinafter, advantageously provides for remote placement of the emitters within a live fire test range while maintaining communication with the system computer 26. As is well known, a DC generator or powered inverter may also be used to power the emitters.
  • An emitter controller 22 controls each emitter 24. The emitter controller communicates with the computer using a broadcast radio antenna 32 and receiver antenna 36 although other types of communication links, for example microwave, wide area wireless network, telephonic network etc. are well known in the art.
  • Referring to FIG. 3, the emitter controller includes a Zworld BL1800 microprocessor 56. A 3.686 kHz resettable TX/RX clock 58 provides timing to the BL1800 microprocessor upon initiation of an enable signal from the microprocessor. A radio transceiver 60 connected to the microprocessor handles radio communications. The transceiver is a TEC T400 as is well known to those of skill in the art. The microprocessor sends a signal to the emitter power control circuit 64 to turn the emitter on and off. An ID select circuit 66 is provided to enable each emitter controller to be set to a unique ID, allowing radio broadcast commands to be addressed to specific controllers. A battery level detector circuit 68 is provided to monitor the battery charge state. Auxiliary circuits provide status LEDs and an override switch 70. The override switch is provided to allow a user at the emitter controller to cause the controller to power its attached emitter without recourse to radio commands, a feature useful during system setup or test.
  • In the exemplary embodiment, the emitters 24 are placed in relation to the firing range to provide a coordinate reference. In this arrangement the need to locate the IR's within close proximity to or at the fixed targets 18 is eliminated so long as the distance vector from the emitter to the target is known. The IR's are therefore generally located external to targets within the firing range but can also be located within the target boundary. The downrange placement of the IR's takes advantage of the high resolution of the camera 12 in order to render the optic and alignment errors negligible for determination of the aimpoint position vector. The Lens Mapping Procedure within the system computer software to be described hereinbelow is used to provide the microRadiansperPixel_H and microRadiansperPixel_V values associated with a particular CCD camera 12 and lens 44 combination that will be mounted to a projectile launcher 16 under test.
  • Referring to FIG. 4, the downrange controller 25 consists of a Zworld BL1800 microprocessor 80, a RIEGL LD90-3100-VLS-FLP rangefinder 82, a TEC T400 radio transceiver 96, and a custom circuit board containing an array of control and interface circuits. The transceiver controls radio communications. A 3.686 kHz resettable TX/RX clock circuit 86 provides timing for radio communications. The SIB and all DRC's are tied together through a hardwired dual-channel current loop and RS485 channel. The two channels of the current loop are used by the SIB to send time critical signals, while the RS485 serial channel is used for bulk data transfer. All hardwired connections to the DRC are optoisolated, protecting the DRC from potentially damaging transient voltages. The current loop channels are isolated by the custom isolation circuit 94. The RS485 serial channel is isolated by a B&B Electronics 485OPIN optoisolator 102. Either the SIB, through the current loop, or the DRC's internal microprocessor 80 may enable the 2 kHz Laser Trigger Clock 84. The Laser Trigger Clock in turn causes the range finder 82 to gather range data at 2 kHz. Data from the rangefinder is downloaded to the microprocessor for analysis and storage through an RS232 channel. An ID select circuit 100 provides each DRC with a unique address.
  • Referring again to FIG. 1, a downrange controller 25 is positioned at the end of each moving target's track 52 in order to monitor the location of the moving target 20. When enabled, the DRC's rangefinder takes 2000 range samples per second. The DRC's microprocessor averages groups of ten samples so that range data is recorded 200 times per second. Repetitive samples are discarded so that only motion is recorded. Each controller is provided with capacity to store up to 60 seconds of motion. The present embodiment has a capacity for 16 individually addressed downrange controllers, although it is within the contemplation of the present embodiment to include additional controllers. A DC supply, for example a battery (not shown), powers each controller 25, although it is within the contemplation of the invention to use other power sources including inverters, fuel cells and DC generators.
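  • the DRC's averaging and motion-only logging can be sketched as follows (a simplification of the described behavior; the names are illustrative):

      def decimate_ranges(samples_2khz):
          # Average groups of ten 2 kHz samples down to a 200 Hz record and
          # discard repeated values so that only motion is stored.
          log, last = [], None
          for i in range(0, len(samples_2khz) - 9, 10):
              avg = sum(samples_2khz[i:i + 10]) / 10.0
              if avg != last:
                  log.append((i / 2000.0, avg))   # (seconds from start, metres)
                  last = avg
          return log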
  • Referring again to FIG. 2 and FIG. 5, trigger event sensors 40 a, 40 b, 40 c, are used to initiate the aimpoint vector determination sequence. Three types of sensors are available for the initiation function, and include a microphone, a powered sensor, and a switch. Each trigger event sensor provides output to the system interface box SIB 28 and to the VCR 42.
  • The microphone 40 a generates a trigger event by detecting fire by listening for projectile launcher recoil and outputs an analog signal. The microphone is a standard commercial microphone having a sensitivity of 6 dB although higher and lower sensitivities will function in the embodiment. The microphone is mounted on, or near, the projectile launcher such that the diaphragm of the microphone responds to the initiating sound of the projectile launcher firing.
  • The powered sensor 40 b is any digital sensor that requires external power. The power sensor is provided with +5VDC from the system interface box (SIB) 28. The SIB generates a trigger event when the powered sensor digital output goes high. Powered sensors include Hall Effect sensors to detect the motion of some projectile launcher part or an accelerometer to detect the shock of the projectile launcher recoil. The powered sensor is mounted on the projectile launcher such that the sensor is responsive to the movement or acceleration of the sensed component.
  • The switch 40 c is a normally open switch that completes a circuit when closed. The SIB 28 generates a trigger event when the switch is closed or pressed. The switch is used to detect a preselected physical projectile launcher operation, such as a trigger pull or a button press, or may be a hand switch held by the operator, test director or system operator.
  • When the particular microphone 40 a, power sensor 40 b or switch 40 c is used, the associated output signal is sent to a glitch detector with lockout timer 110. The lockout timer provides a period after each trigger event during which further trigger events are ignored. The glitch detector output is sent to the Zworld BL 1800 microprocessor 112 within the SIB 28, alerting the microprocessor of the trigger event. The glitch detector output is also sent to a trigger tone generator 50. The trigger tone generator produces a short burst of audio line level tone that is recorded by the system video recorder 42, marking the trigger event on the video recording.
  • The BL 1800 microprocessor 112 of the SIB 28 serves as a general interface between the downrange and firing line hardware, including the emitter controllers 22 and downrange controllers 25, the trigger event sensors 40 a, 40 b, 40 c, the VCR 42 and the system computer 26. The TEC T400 radio transceiver 120 handles radio communications.
  • The SIB 28 communicates to the system computer 26 through three channels: RS232 serial, parallel digital, and RS485 serial. General communication between the SIB and the system computer are passed through the RS232 serial channel. The parallel digital signals are used to transmit time critical communication. The digital signals from the system computer are received by the SIB through a Data Translations STP68 interface board 124. Signal conditioning circuitry 122 turns two of the digital signals into the dual-channel current loop that joins the RS485 serial channel in hardwire linking the SIB and all of the DRC's. The current loops transmit time-critical events to the DRC's while the RS485 serial channel allows for bulk data transfer. A Keypad 114 and a Matrix Orbital LK2204-25 LCD display 116 allow the user to interact with the SIB, monitor its status, issue commands, and run test functions.
  • The system computer 26 is a typical commercially available PC capable of running software embodied on the computer media. In the exemplary embodiment the computer includes a Windows 2000™ or higher operating system. PC requirements in the exemplary embodiment are at least 128 MB of RAM and a 733 MHz Pentium III processor or equivalent, although the system will run on other platforms. A CDROM drive or equivalent means, for example an external memory device or Internet download capability, is required for software installation. As is well known, the PC contains a memory device such as a hard disk, a microprocessor, and input and output devices. Preferably, the output devices include both a display screen and a printer. To accept the software and databases, the memory device, for example the hard disk, preferably should have at least 10 GB of free space for program operation and data storage. However, the memory requirement may vary and depends upon the expected size of the database. As is well known in the art, memory can be selected to match the database or increased by installing a higher capacity disk drive.
  • Software Control
  • The software for controlling the equipment that comprises the illustrative embodiment will now be described. It is to be appreciated that the software can be expressed in many forms by those skilled in the art and only the necessary functions will be described herein. The software provides control among downrange hardware, operation station hardware and firing line hardware.
  • SIB Program
  • Referring to FIG. 6, the compiled code in the Zworld BL 1800 microprocessor 112 (see FIG. 5) automatically initiates at system power up 150. At initialization 152 variables and arrays are created and the digital I/O system is initialized with outputs set to their default startup states. The main loop is started 154 and repeats until the system is powered down. The main and auxiliary trigger pulse digital inputs are polled 156 to check for hardware detection of projectile launcher events. The system computer is alerted 158 to a hardware detected projectile launcher event by raising the trigger flag. Then the main and auxiliary trigger reset digital inputs are polled 160 to determine if the system computer commands the main or auxiliary trigger flags to be lowered 162; the flags are lowered as commanded by the system computer. The RS232 serial buffer serving communications to the LCD display is polled 164 to determine if a keypad key press has been received. In the handle key press step 166 the program responds appropriately to the key press. The RS232 serial buffer serving communications to the system computer is then polled 168 to determine if a computer command has been received. In the ping steps 170, 172, the program determines whether a ping has been received and responds to it. All commands from the system computer other than a ping are meant for the emitter controllers via the radio link, and the command bytes are placed in the outgoing queue 174. In the bytes in outgoing queue step 176, the program determines if there are any computer commands waiting to be sent to the emitter controllers via the radio link. The SIB radio transceiver's received signal strength indicator line is polled 178 to determine if another unit's transceiver is currently transmitting. Then, all computer command bytes in the outgoing queue are sent to the emitter controllers via the radio transceiver 180, and the bytes are cleared from the outgoing queue after being sent 182. Next the SIB radio transceiver's received signal strength indicator is polled again to determine if another unit's transceiver is currently transmitting 184. If another unit is transmitting, incoming radio messages are received 186 and the system responds as appropriate 188 to the received radio message. Finally, the main loop ends 190 and repeats from the start step 154.
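  • The polling structure of this loop can be summarized with the following sketch. This is not the actual BL 1800 firmware; every object, method and handler below is a hypothetical stand-in used only to show the order of the polled steps.

```python
def sib_main_loop(io, radio, pc_serial, lcd_serial,
                  handle_keypress, handle_radio_message):
    """Illustrative skeleton of the SIB main loop of FIG. 6."""
    while True:
        # 156/158: poll trigger pulse inputs; raise flags for the computer
        for trig in ("main", "aux"):
            if io.trigger_input(trig):
                io.raise_trigger_flag(trig)
        # 160/162: lower trigger flags when the system computer commands it
        for trig in ("main", "aux"):
            if io.trigger_reset_input(trig):
                io.lower_trigger_flag(trig)
        # 164/166: respond to keypad key presses on the LCD serial channel
        if lcd_serial.has_data():
            handle_keypress(lcd_serial.read())
        # 168-174: answer pings; queue all other commands for the ECs
        if pc_serial.has_data():
            cmd = pc_serial.read_command()
            if cmd.is_ping():
                pc_serial.respond_to_ping()
            else:
                radio.outgoing.extend(cmd.bytes)
        # 176-182: when the channel is clear, send queued bytes, then clear
        if radio.outgoing and not radio.rssi_active():
            radio.send(bytes(radio.outgoing))
            radio.outgoing.clear()
        # 184-188: receive and handle any incoming radio messages
        if radio.rssi_active():
            handle_radio_message(radio.receive())
```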
  • Emitter Controller (EC) Program
  • Referring to FIG. 7, the emitter controller program will now be described. The compiled code resident in the emitter Zworld BL1800 microcontroller 56 (See FIG. 3) is initiated upon powerup 202. At initialization 204 variables and arrays are created and the digital I/O system is initialized with outputs set to their default startup states. The two digital inputs of the EC's 2-position DIP switch are polled to determine the controller's ID tens digit, and the four digital inputs of the 10-position rotary switch are polled to determine the controller's ID ones digit 206. The status LEDs are flashed to indicate the emitter identification number; the LEDs are cycled a number of times equal to the tens digit, then flashed a number of times equal to the ones digit 208.
  • The main loop starts 210 and repeats until power down. The program queries whether there are any bytes waiting to be sent to the system computer via radio link 212. In a responses allowed step 214, the EC settings are queried to determine if radio responses have been disabled by system computer command. Radio responses may be disabled to prevent confused radio traffic when many EC's are deployed. The program pauses for a duration determined by the emitter ID number 216, 218, 220. This pause prevents the EC's from attempting to transmit simultaneously and causes them to respond to broadcast commands in numerical order. The EC radio transceiver's received signal strength indicator line is polled to determine if another unit's transceiver is currently transmitting. Next, in the bytes in outgoing queue step, the waiting messages in the outgoing queue are broadcast via the EC's radio transceiver 222. The outgoing queue is cleared 224 after the waiting messages are sent. The EC's set ID is checked 226. Then the EC determines its battery state 228 through two onboard voltage comparators, dividing the range of battery voltages into three categories: good, warn and critical. If manual override functionality has not been disabled by the system computer 230, the manual override switch state is polled and, if "on", the manual override flag is set 232. The EC then polls the radio transceiver's RSSI line to determine if another unit's transceiver is currently transmitting 234. In steps 236 through 240 incoming bytes are received and the message transmitted by the bytes is handled. Appropriate responses, if any, are placed in the outgoing queue. The program polls whether the emitter manual override flag has been raised 242. If the flag is raised then the emitter is turned on 244 and the emitter status LED is flashed steadily 246, indicating that the emitter is on due to manual override, and the main loop ends. If the manual override flag is not raised, the program polls whether the emitter flag is raised 248. If the flag is raised then the emitter and the emitter status LED are turned on 250 and the main loop ends. If neither the manual override flag nor the emitter flag is raised, the emitter is turned off 254, the emitter status LED is turned off 256 and the main loop ends 258.
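  • Two of these steps lend themselves to a short sketch: the ID-dependent pause that staggers EC radio responses, and the three-category battery check. The slot width and voltage thresholds below are illustrative assumptions, not values from the specification.

```python
import time

def staggered_response_delay(emitter_id, slot_s=0.050):
    """Each EC waits a time proportional to its ID before transmitting, so
    replies to a broadcast arrive in numerical order without collisions.
    The 50 ms slot width is an assumed value."""
    time.sleep(emitter_id * slot_s)

def battery_state(volts, warn_v=11.5, crit_v=10.8):
    """Divide the battery voltage range into good/warn/critical, as the two
    onboard comparators do; thresholds assume a 12 VDC battery pack."""
    if volts <= crit_v:
        return "critical"
    if volts <= warn_v:
        return "warn"
    return "good"
```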
  • Downrange Controller Program
  • Referring to FIG. 8, on powerup 262 the compiled code resident in the downrange controller Zworld BL 1800 microcontroller 80 starts and runs. Variables, states and settings are initialized 264. The hardware ID is checked 266 by polling the four digital inputs of the DRC's 16-position rotary switch to determine the unit's ID number. The handle temp function 268 uses onboard sensors to determine the DRC's current internal temperature. The DRC's fan, defroster, and/or heater are then activated as needed to keep the unit's internal temperature within tolerance limits.
  • The main loop 270 is then started and repeated until shutdown. Once every 15 seconds a subset of commands within the main loop is performed 272 wherein the hardware ID lines are checked and the DRC ID number is ascertained 274, the handle temperature routine is performed 276, and the check battery voltage routine is performed 278, thereby ending 280 the periodic check cycle. When this main loop periodic subset is completed, the main loop functions are performed.
  • Each pass 282, the main loop begins by checking the state of Trigger1 284. Trigger1 is a current loop passing through the SIB and all of the DRC's. When Trigger1 is active, all rangefinders begin ranging and sending data to their microcontrollers. If Trigger1 is active, the DRC then determines if the system computer has enabled the DRC for recording 286. If the DRC is enabled for recording, the microcontroller begins processing and recording the incoming data from the rangefinder 288. The rangefinder continues to range until Trigger1 returns inactive. The microcontroller continues to record data from the rangefinder until no more data is forthcoming or until a full sixty seconds of data have been recorded. Once all data has been recorded the DRC disables itself 292 for recording if the system computer has placed it in disable after record mode. Disable after record mode 292 prevents new data from being accidentally overwritten before the data can be downloaded to the system computer. If the DRC is in the disable after record mode then no further rangefinder data will be recorded until the DRC is again enabled by a system computer command. If Trigger1 is not active but rangefinder data has been received 294, the data is discarded. The RS485 serial buffer serving the communications with the system computer is then polled 298 to determine if commands have been received. If so, the microprocessor acts appropriately 300, 302. Any responses to the system computer are placed in the outgoing queue 304. The microprocessor next checks the outgoing queue. If the queue contains any bytes, they are sent to the system computer through the RS485 serial channel and the queue is cleared 306, 308. The RS232 serial buffer serving communications with the keypad/display unit is polled to determine if a keypad keypress has been received 310. Appropriate responses to any keypresses are generated 312 and the main loop ends 314.
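  • The Trigger1 recording path can be sketched as follows; the drc, rangefinder and clock objects are hypothetical stand-ins, and only the control flow of steps 284 through 294 is represented.

```python
MAX_RECORD_S = 60  # the DRC records at most sixty seconds of data

def drc_handle_trigger1(trigger1_active, drc, rangefinder, clock):
    """Sketch of the DRC Trigger1 handling described above."""
    if not trigger1_active:
        rangefinder.discard_pending()   # 294: data without Trigger1 is discarded
        return
    if not drc.record_enabled:          # 286: computer must enable recording
        return
    start = clock.now()
    while rangefinder.has_data() and clock.now() - start < MAX_RECORD_S:
        drc.record(rangefinder.read())  # 288: store incoming range samples
    if drc.disable_after_record:        # 292: protect data until downloaded
        drc.record_enabled = False
```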
  • System Computer Program
  • The system computer program comprises a main program and two threads: the frame grabber thread and the event detection thread. A process events routine captures data for later analysis, but the routine also executes in real time.
  • Referring to FIG. 9 a, a main program is illustrated at 300. The program starts 302 by initializing data structures 304 entered into databases 306, 308, 310. The data is collected from the on-site survey 312, the lens mapping procedure 314, and from the test plan and ballistic models 316. After the data structures are initialized, the program initiates communication linkage and verification with the system hardware through wires and radios, along with its internal communications 318. The event detection and frame grabber threads are initialized and placed in a suspended state 320. The computer display shows the main menu 322 whereby the operator selects either the capture or analyze modes 324. In the capture mode filenames are assigned and new data files are opened and initialized for the event 326. In the analyze mode the data is analyzed 336 a. The detection processing mode is selected for either live processing 330, 332, 334, 336 b or post processing 338, 340, 342. During Live Processing, data from a single event is captured and immediately analyzed at 336 b. For Post Processing, where a large amount of data is to be captured sequentially, the data are recorded and analyzed later via AnalyzeData( ) at 336 a. The System Interface Box (SIB) turns on the default emitter for the selected target after a target is selected 329 or 337. The event detection sequence initiates when an event is selected at 330 or 338 and the video recorder is started.
  • Referring to FIG. 9 b, the thread for monitoring event detection ports on the SIB hardware is shown at 350. Following initialization, the event detection circuits are reset 354 and the event counters are initialized 356. The event detection loop starts 360. The video recorder status is checked 362 and the detection thread is suspended if the video recorder is off 364. If the video recorder is operating, the video frames from the projectile launcher-mounted camera are recorded on the digital video recorder. If Live Processing mode was selected 366, then the video frames are also input into a computer memory buffer configured as a six second first in first out (FIFO) memory buffer 368. If an event has occurred 370 then the event detector circuits are reset 372 and event data are processed 374. If an event has not occurred, then the burst timer is checked 380 and the burst count is incremented if the burst timeout has passed 382. If not in the burst mode, or after events are processed 372, 374, the event detection loop is restarted 360.
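  • The six-second FIFO buffer of step 368 behaves like a fixed-length queue of timestamped frames; a minimal sketch follows, assuming a 30 frame-per-second camera (the frame rate is an assumption, not a value from the specification).

```python
from collections import deque

FRAME_RATE_HZ = 30      # assumed frame rate
BUFFER_SECONDS = 6      # six-second FIFO buffer per the description

class FrameFIFO:
    """Holds the most recent six seconds of camera video; old frames fall
    off the front automatically as new frames are pushed."""

    def __init__(self):
        self.frames = deque(maxlen=FRAME_RATE_HZ * BUFFER_SECONDS)

    def push(self, timestamp, frame):
        self.frames.append((timestamp, frame))

    def frame_at(self, event_time):
        """Return the buffered frame whose timestamp is closest to the event."""
        return min(self.frames, key=lambda tf: abs(tf[0] - event_time))[1]
```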
  • Referring to FIG. 9 c, the process for analysis of event data is illustrated at 400. The process handles the selection, display, and marking of the emitter locations for event images stored in the FIFO buffer by the video frame grabber. User controls for displaying, marking, and saving marking data are provided 404. An event is selected from the available events 410. The data related to an event is used to find and display the video frame for the event 412. The operator verifies the event occurrence and emitter marking, and the marked event data is saved 414. The marked emitter data is used with the target and emitter coordinate data provided via the survey process to determine aimpoint performance 416. Aimpoint data for the target and event of interest is saved 418 and combined with the ballistics model to provide flyout of the round towards the target. The operator may then select another event to be analyzed, or return to the previous process 420, 422.
  • A projectile dynamics model, as is well known in the art, is included for both training and testing. The dynamics model calculates the fall of the shot. Such testing may include modifications to the projectile launcher sights or triggering mechanisms. The projectile dynamics model may include aerodynamic drag effects as well as lift and gravity forces upon the projectile. Wind and other shocks encountered by the projectile are also included in the dynamics model. Such modeling, including the aerodynamic effects of lift and drag caused by exogenous aerodynamic forces, is well known in the art.
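  • As a minimal sketch of such a model, the following point-mass flyout integrates gravity and a quadratic drag term, with an optional constant wind force. The drag formulation and every parameter value are assumptions for illustration; an actual model would use the measured ballistics of the round under test.

```python
import math

def flyout(v0, elev_rad, mass_kg, drag_k, wind_force=0.0,
           dt=0.001, max_range=None):
    """Point-mass flyout with gravity and quadratic drag (F = -k*v*|v|).
    Returns ground range and time of flight at ground impact or max_range."""
    g = 9.81
    vx, vy = v0 * math.cos(elev_rad), v0 * math.sin(elev_rad)
    x = y = t = 0.0
    while y >= 0.0 and (max_range is None or x < max_range):
        speed = math.hypot(vx, vy)
        ax = (-drag_k * vx * speed + wind_force) / mass_kg
        ay = -g - (drag_k * vy * speed) / mass_kg
        vx, vy = vx + ax * dt, vy + ay * dt
        x, y, t = x + vx * dt, y + vy * dt, t + dt
    return x, t
```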
  • In addition to the dynamics model, a projectile burst effect model, the development of which is well known in the art, may be included. Burst effects are correlated to the actual projectile parameters, for example the quantity of explosive, the fracture characteristics of the casing and the effects of proximity devices.
  • Setup and Operation
  • The operation of the invention will now be described.
  • Range Survey
  • First, a survey of the firing range is performed. The system uses survey equipment to measure the three-dimensional coordinates of the targets, reference emitters and shooting position. Since all of the subsequent calculations are based upon these measurements, the accuracy of these coordinates bounds the accuracy of the system.
  • The scenario objectives, whether testing or training, are factored into the design of the range. The location of the firing line is determined along with placement of downrange hardware consisting of rangefinders and IR emitters, thereby defining the range and available targets. The static emitters preferably should be placed within 20 mils of the targets but may also be placed further from the target with an acceptable loss of accuracy. Next, a static target survey is performed in which range data for the shooter, static targets, and emitter three-dimensional coordinates within the live fire range are determined along with ground plane elevation at each target.
  • Equipment used to survey live fire test ranges typically includes theodolites, transits, laser range finders and global positioning systems. The exemplary embodiment uses a survey that incorporates a theodolite angular measurement device with a laser rangefinder to provide azimuth, elevation, and range to a retro-reflective marker with an option to calculate and output three-dimensional coordinates. Typically, measurements are referenced to geodetic coordinates. In the exemplary embodiment, a relative reference method is used by defining a local coordinate system origin (defined as (0, 0, 0)) that coincides with a predefined shooter position. The target and emitter coordinates are measured relative to the shooter position. Regardless of the measurement method, the desired results are coordinates reported in Northing, Easting, and Elevation coordinates (n, e, h).
  • Survey equipment, under ideal circumstances, can supply accuracies on the order of + or −1 mm. The uncertainty in the target and reference emitter positions can increase to + or −0.1 meter (each) due to additional errors that can occur when the targets and reference emitters are placed after the survey. Preferably, the targets and reference emitters will be positioned before the survey, and their coordinates will then be obtained by surveying to their centers. Even with the survey performed after target and emitter placement, the best achievable uncertainty in their positions is considered to be + or −0.01 meters (each). Therefore, with respect to the total uncertainty error the survey contributes + or −0.01 meters for each of the shooter position, target position and emitter position.
  • Communication and control connections are made between the operation station (comprising the system computer 26 and the SIB 28) and the firing line and downrange hardware. Some connections are made through cables and other connections are made through radio means (including radio relays). After controls are established, the components are set to their specific addresses for communication with the SIB. In particular, the trigger events are established and each camera is associated with its projectile launcher. It is important that the location of each target and IR emitter within the range be precisely determined. With this range data determined, the Input Files are input into the computer.
  • Surveyed Target and Emitter Coordinates
  • The targets and emitters are located on the firing range at specific northing, easting, and elevation coordinates provided by survey. The accuracy of the target and emitter positions depends on when the survey is performed relative to placing the targets and emitters on the range. One method is to survey and mark the desired positions on the ground, followed by the later placement of the targets and emitters. Adding measured elevation offsets to the ground survey positions gives the final coordinates of the targets and emitters. Using this method, estimated errors in the coordinates range from + or −50 mm to + or −100 mm.
  • In the exemplary embodiment, targets and emitters are located by securely positioning the targets on the firing range at the approximate ranges desired, and then surveying directly to the targets and emitters. If surveying is performed after placement, the coordinates of the targets and emitters should fall within + or −10 mm or better. Measured offsets of the target centroids from the ground are recorded and used to evaluate projectile ground impact in the immediate area of the target.
  • During aimpoint analysis, when marking the location of the emitter relative to the pixel coordinates, the centroid of each emitter is located by visual interpolation. Marking precision is increased by allowing for sub-pixel marking via a zoom function (implemented in the system software). The zoom function magnifies the area of the image that contains the emitter signature. Sub-pixel marking precision is the inverse of the zoom factor chosen; a zoom factor of 4 provides sub-pixel precision of 0.25 pixels. Marking of the emitter centroid is performed by noting the horizontal and vertical dimensions of the emitter image in pixels. The location of the emitter centroid is determined by dividing these dimensions in half.
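  • The arithmetic of the zoomed marking step can be stated compactly; the function names below are illustrative only.

```python
def subpixel_precision(zoom_factor):
    """Marking precision is the inverse of the zoom factor, e.g. a zoom
    factor of 4 gives 0.25-pixel precision."""
    return 1.0 / zoom_factor

def emitter_centroid(left_px, top_px, width_px, height_px):
    """Centroid estimate obtained by halving the emitter image's horizontal
    and vertical pixel dimensions."""
    return left_px + width_px / 2.0, top_px + height_px / 2.0
```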
  • Correspondence between the Shooting Range and CCD Camera Image
  • Referring to FIG. 10, the geometry of the shooter, emitter and target measurement process is illustrated. The shooter, emitter, and target geometry forms a long, narrow triangle with the shooter at the apex and the emitter and target located downrange at the other triangle vertices. Equations describing the angles from the reference emitter to the target are shown along with the sight line-emitter angles with reference to the CCD camera image.
  • The shooting position will preferably be located beforehand by having the surveyor place a marker at the defined shooting position on the ground. The accuracy of the shooting position coordinates would follow the best-case coordinate accuracies of + or −10 mm.
  • With the shooter, emitter and target positions ascertained, the process for finding the sightline to target aimpoint angles is performed. The process consists of four basic steps: 1) using the 3D coordinates of the shooter, emitter, and target, calculating the emitter and target angles (vertical and horizontal) relative to a line parallel to a reference (northing) axis; 2) calculating the emitter to target angles by subtracting the emitter angles from the target angles; 3) measuring the sightline to emitter angles by marking the emitter centroid on the CCD image captured when a shot is fired and correcting for boresight angular errors; and 4) using the calculations made from survey coordinates and the calculations made via the CCD measurements for the particular shooter, target, and emitter combination, calculating the sightline to target angles by subtracting the emitter to target angles from the sightline to emitter angles.
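  • The four steps can be expressed as a short sketch. The sign conventions and the direction of the boresight correction are assumptions; coordinates are (n, e, h) tuples and angles are in radians.

```python
import math

def angles_from_shooter(p, q):
    """Step 1: horizontal and vertical angles of point q seen from shooter p,
    relative to a line parallel to the northing axis."""
    dn, de, dh = q[0] - p[0], q[1] - p[1], q[2] - p[2]
    return math.atan2(de, dn), math.atan2(dh, math.hypot(dn, de))

def sightline_to_target(shooter, emitter, target, sl_to_emitter, boresight):
    """sl_to_emitter is the (h, v) pair measured by marking the emitter on
    the CCD image; boresight is the (h, v) correction pair."""
    eh, ev = angles_from_shooter(shooter, emitter)   # step 1: emitter angles
    th, tv = angles_from_shooter(shooter, target)    # step 1: target angles
    e2t_h, e2t_v = th - eh, tv - ev                  # step 2: emitter to target
    sle_h = sl_to_emitter[0] - boresight[0]          # step 3: boresight-corrected
    sle_v = sl_to_emitter[1] - boresight[1]
    return sle_h - e2t_h, sle_v - e2t_v              # step 4: sightline to target
```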
  • Lens Mapping
  • The Lens Mapping Procedure (LMP) will now be described. The lens mapping is preferably performed onsite after setup of the equipment to ensure that the lens mapping accurately reflects the camera and lens settings. During lens mapping, the location of an emitter in the camera image is used in finding the angular offset of the emitter from the camera's optical axis, and subsequently the angular offset from the projectile launcher's line of sight. In order to determine the angular offsets, camera and lens combinations are mapped to determine the angle represented by each pixel of the camera's imaging device. Camera/lens mapping comprises locating the camera and lens at a known distance in millimeters from a target calibrated in millimeters, and capturing the resulting image. The captured image (in digital format) is examined to determine the relationship between the pixels and the calibration target markings. The LMP is a function of the system computer software and is used to provide the microRadiansperPixel_H and microRadiansperPixel_V values associated with a particular CCD camera and lens combination that will be mounted to a projectile launcher under test during a scenario. The microradians per pixel across the field of view of the camera/lens combination are calculated from the relationship of the pixels to the linear target markings, and the known distance from the camera lens to the target. In the preferred embodiment a standard camera faceplate format of ½″ is typically used, although ¼″, ⅓″ and ⅔″ formats as well as nonstandard formats are suitable.
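  • The mapping calculation itself reduces to a small-angle computation; the following sketch, with illustrative argument names and an assumed example geometry, shows how a microradians-per-pixel value could be derived.

```python
import math

def microradians_per_pixel(marking_mm, distance_mm, pixels_spanned):
    """Angle per pixel from a calibrated target: markings marking_mm apart,
    placed distance_mm from the lens, spanning pixels_spanned pixels."""
    return math.atan(marking_mm / distance_mm) / pixels_spanned * 1e6

# Example (assumed numbers): 100 mm markings at 10 m spanning 70 pixels
# gives roughly 143 microradians per pixel.
urad_per_px_h = microradians_per_pixel(100.0, 10000.0, 70)
```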
  • To begin the lens mapping process, the particular camera and lens that are suitable for the field of view and range are selected for the aimpoint measurement task. The selection of a specific lens is dependent on both the camera selected (due to the camera image format used and also due to the lens mounting requirements) and on the focal length needed to provide a suitable field of view for the downrange targets. As an example and not by way of limitation, focal length lenses from f=24 mm to f=100 mm can be used in combination with CCD image formats of ½ in. and ⅓ in. When using the same focal length lenses for the ½ in. and ⅓ in. formats, the field of view (FOV) for the ⅓ in. CCTV format is less than the FOV for the ½ in. CCTV format because of the difference in the physical size of the image surface. However, since the number of pixels is the same for the two image formats (768h×494v), the ⅓ in. format provides a higher angular resolution at the expense of a smaller field of view. In the exemplary embodiment, the field coverage must provide for the camera's imaging of a target's reference emitter when the projectile launcher is sighted on the target. This requirement originates from the need to mark the reference emitter when a shot is fired at a target.
  • It is to be appreciated that each scenario will have its own particular requirements, and each scenario should be analyzed to determine what field coverage is required. In the exemplary embodiment, field coverage falls between 10 meters and 100 meters, although larger fields are within the contemplation of the present invention. Smaller field coverages lead to higher system measurement resolution, but the emitters will need to be closer to the targets. Larger field coverages will reduce the measurement resolution, but will allow larger target to emitter separations and will help to ensure that the emitter will be visible when a shot is fired.
  • After the camera and lens are selected the camera is mounted to a tripod and placed near the planned shooter position. Three reference emitters are located at a distance from the camera that approximates the range of the planned scenario targets. The three reference emitters should be placed in a straight line, preferably parallel to the horizon. To align the reference emitters, the CCD camera image is referred to and the spacing is arranged to provide for all three emitters to be in the camera's FOV when the center of the camera's FOV is aligned to the left-most emitter.
  • For the horizontal lens mapping, the second emitter should be preferably placed at about 25% of the image width from the image center towards the right side of the image. The third emitter should be placed preferably at about 70% of this same width. This placement of the emitters will allow for the camera to be rotated 90 degrees counterclockwise and the same three emitters to be used when performing the vertical lens mapping. Using survey equipment, the 3D coordinates of the camera and the three emitters are measured and recorded. If desired, the camera can be set to be the origin with the reference axis defined by the line between the camera and the left-most emitter. With the camera mounted on a tripod, the camera is panned and tilted to visually attempt to align the center of the left-most emitter to be in the center of the camera image. It is to be appreciated that all three emitters are operating and are within the camera's horizontal FOV.
  • Once these steps are completed the camera and emitter 3D coordinates are calculated. Although the calculations can be performed manually, in the preferred embodiment the data is entered into a software program that is part of the system software. When the software program is being used the camera and emitter 3D coordinates are entered. After saving the coordinates, the camera image is brought up on the computer screen and the process of performing the fine camera alignment and marking the emitters for lens mapping is initiated. The mouse cursor is placed over the emitters and the (pix.x, pix.y) pixel coordinates are viewed on the screen. The cursor is placed over Eo and the pixel coordinates are checked against the center of the CCD image coordinates. If the emitter is not at the center, the camera is panned and tilted until Eo is at the pixel coordinates for the center of the image. Next the cursor is placed over the emitter E2H and the pix.y value for E2H is checked against that of Eo. If the pix.y coordinates are not the same, the camera is rolled about its axis to make both Eo and E2H (and also E1H) have the same pix.y value.
  • For the vertical lens mapping, the camera is rotated 90 degrees counterclockwise and aligned visually so that the left-most emitter becomes the center emitter within the CCD image with the remaining emitters being within the camera's vertical FOV. The adjustment of the camera to properly image the three emitters for the vertical lens mapping is similar to the horizontal alignment, except that the pix.x coordinates of all three emitters should be the same value, which should be the pixel coordinate of the horizontal center of the image. The camera is panned, tilted, and rolled to perform the alignment and then mark the emitters in the same order. After the last emitter is marked for the vertical lens mapping, the horizontal and vertical lens mapping values will be displayed and an option for storing the values will be given. Lens map values are stored with a name that corresponds with the camera and lens combination used for later retrieval during the live fire aimpoint scenario.
  • The errors that can occur in the camera/lens mapping are: errors in the camera/lens distance to the target, errors in the target calibration, and errors in the reading of the pixel relationship to the target. In measurements of the camera and lens combination mapping function, the standard deviation of the pixel angle was determined to be less than 0.15 milliradians (mrad). This angle is considered to be the lens mapping uncertainty contributing to total system error. The angular value is constant and does not change with range.
  • Camera Mounting and Boresight Calibration
  • The camera is mounted to the projectile launcher, preferably with a rigid mount providing a view of the emitters over the entire projectile launcher super-elevation range. The linear offsets (horizontal and vertical) of the camera aperture from the projectile launcher sight line are measured. The ideal alignment of the camera for measuring emitter angles would have the optical axis of the camera lens coaxial with the sightline of the projectile launcher, and the camera lens located at the surveyed shooter position coordinates. However, this is not practical for live fire since the camera must be mounted out of the way of the projectile path and any projectile launcher operations. The mounting requirements lead to coordinate offsets and angular deviations of the optical axis of the lens from the projectile launcher sightline, which are corrected by the boresight process.
  • As can be appreciated by those skilled in the art, the boresight process provides a set of boresight angles that are used to correct the subsequent measured emitter to sightline angles obtained during the firing event. When establishing a boresight to emitter geometrical relationship at the boresight range, the boresight angles derived from the boresight emitter measurement include angular deviations between the sightline and the camera optical axis as well as apparent angular deviations due to the coordinate offset from the sightline, assuming that the optical axis to sightline offsets are negligible. However, even small optical axis to sightline offsets will cause apparent angular deviations, and their effect should be considered.
  • It is to be appreciated that because the coordinate offsets are constant and the boresight emitter range can vary from the boresight target range, errors will occur in the aimpoint calculations as the target range varies from the boresight range. The magnitude of these errors depends on the offsets and the difference between the range to the target and the range to the emitter. Without correction, the aimpoint angles will only be correct when the target's emitter is at the boresight range.
  • To account for the error that will be introduced for firing at targets that are at different ranges than the boresight range, the portion of the boresight angles that is due to the sightline to camera offset is mathematically subtracted from the aimpoint calculations. This adjustment is done by calculating the angular deviations due to the sightline to camera offset at the boresight range, and then subtracting those angles from the angular deviations that are due to the camera lens offset at the target emitter range. This difference is then added to the boresight angles to produce the boresight correction angles for emitters at ranges other than the boresight range. As can be appreciated by those skilled in the art, there will still be some residual errors after this boresight correction process due to measurement uncertainty.
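  • Using the small-angle approximation (apparent deviation = offset / range), the correction can be sketched per axis as follows; the sign convention is an assumption.

```python
def boresight_correction(boresight_angle_rad, offset_m,
                         r_boresight_m, r_target_emitter_m):
    """Range-dependent boresight correction: remove the offset-induced
    deviation built into the boresight, add the deviation at the target
    emitter range. Applied separately to the horizontal and vertical axes."""
    dev_at_boresight = offset_m / r_boresight_m
    dev_at_target = offset_m / r_target_emitter_m
    return boresight_angle_rad + (dev_at_target - dev_at_boresight)

# Example (assumed): a 50 mm offset boresighted at 100 m but fired at 300 m
# shifts the correction by (0.05/300 - 0.05/100), about -0.33 milliradians.
```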
  • The CCD camera is preferably rigidly mounted to the projectile launcher under test and must be aligned to the projectile launcher sightline. Some adjustment capability is provided in the mount to adjust the optical axis of the camera lens to be approximately parallel to the projectile launcher sightline. If the optical axis is parallel, there will be a constant linear offset between the sightline and the optical axis, but no angular offset. However, the constant linear offset translates into an apparent angular offset, which decreases with increasing range. Since the mechanical alignment process is relatively coarse, an electronic alignment, or boresighting, is performed to find the residual error for removal during actual aimpoint error measurements.
  • The boresight calibration is performed by aiming the projectile launcher at a boresight target and emitter pair placed at a known range. An expert gunner fires one or more shots (preferably three) at the boresight target and the boresight emitter centroid is marked on the CCD image. The shot group is analyzed to verify proper aiming by the expert gunner relative to the projectile launcher under test. If acceptable, the average of the shot grouping is used as the boresight correction values to account for the remaining angular offset of the optical axis to the sightline. As can be appreciated by one skilled in the art, the sight settings remain unchanged after the boresight process is completed.
  • The acceptability of the aimpoint errors is dependent on the projectile launcher under test. For example, an expert M16 gunner using the iron sights can aim at a well-marked target to within 0.5 milliradians (mrad), while sighting errors for an M203 Quad sight may be as high as 4 mrad.
  • For adjustable sight projectile launchers, for example OICW, OCSW, MK-19, M203, etc., the adjustable sights raise two additional concerns caused by the projectile launchers' significant changes in the sightline to barrel angle with changes in the target range. As is well known in the art, the adjustable sight design is used to produce super-elevation of the barrel to fly a projectile to a target. In some cases the elevation angle may be on the order of 36 degrees.
  • The change in the sightline to barrel angle produces a change in the camera optical axis (OA) to sightline (SL) angle and offset. The first concern is that, since the camera is attached to the barrel of the projectile launcher, the camera will rotate with the barrel's super-elevation. If the barrel is super-elevated more than a few degrees, the infrared reference emitters may be out of the camera's field of view (FOV) and the emitter cannot be marked. Second, for each sightline to barrel angle setting, the coordinate and angular offsets of the optical axis to the sightline will be different, requiring a separate compensation for each new sight setting.
  • It is within the contemplation of the present invention that the loss of emitter in the camera's FOV be accommodated by a mount that provides for indexed angular rotation of the camera to counteract the super-elevation of the barrel. A selection of indexed camera rotation angles can be used to keep the emitter within the FOV for a group of sight angle settings. The number of camera rotation settings depends on the range of the sight angles versus the FOV. In an alternate embodiment, emitters can be mounted at higher elevations on the range so that they are in the FOV when the projectile launcher is super-elevated.
  • When the geometry of the projectile launcher sightline versus sight adjustment is ascertained, it would be possible to boresight at an intermediate sight range, and then to add or subtract the angular changes in the sightline angles as the sight is indexed to settings for ranges other than the boresight range. These sightline angular changes would include both elevation and azimuth angles since adjustable sights also produce a horizontal change in the sightline angle to account for the increased effect of projectile precession with range. In addition to changing the OA to SL angles, there will also be a change in the optical axis to sightline offsets that will have to be taken into account.
  • In another embodiment for adjustable sight projectile launchers, a separate boresight for each indexed sight setting, with its corresponding indexed camera rotation, could be performed. In effect, each setting of the sight, along with any required rotation of the camera, would act as an individual fixed sight projectile launcher. In practice, a family of boresight and OA to SL offsets would need to be saved for each sight setting that is planned for use in an experiment or exercise.
  • For either of the hereinabove embodiments for adjustable sight use, the shooter would have to indicate the current indexed sight setting for the firing event. The software would then use the boresight and OA to SL offsets for that sight setting via either a mathematical sightline model or a sight setting lookup table. Using either of these two methods for adjustable sights in the software will result in no additional uncertainties in aimpoint measurements.
  • Moving Target Measurements
  • The goal of the moving target measurement system is to provide the 3D coordinates of the moving target centroid at the time that a shot is fired (to determine the projectile launcher sightline angles relative to the target center) and at the time of the terminal position of the fired round (where it intersects the target path plane, hits the ground, or air bursts), in order to differentiate the location of the moving target from that of the round at the time the round reaches its terminus or intersects the target. The location of the moving target's centroid, like that of the fixed targets, is determined with reference to the infrared emitters.
  • Moving targets, along with associated emitters and laser rangefinders, are established during setup. The laser rangefinders are aligned and the downrange controllers are set up. The laser rangefinder is zeroed at the moving target home or initial position. Then, the moving target and emitter 3D coordinates at the home (or initial) and end positions are obtained.
  • Referring to FIGS. 11 and 12, which depict the target path plane and identify the target angles from the shooter's position, the sightline offset angles from the target and the subsequent sightline angles, the method of calculating the sightline intersection with the target plane will now be explained. The 3D coordinates of the moving target and emitter at the time a shot is fired are found by measuring the target sled offset from the target sled home position via a rangefinder. This offset is used to produce a 3D offset relative to surveyed 3D coordinates of the target and emitter at the home position. The 3D offsets are added to the home position 3D coordinates to produce the target and emitter 3D coordinates within the test range coordinate system at the time of an event. These target and emitter 3D coordinates at shot fired are used, along with the CCD image pixel coordinates of the emitter, to produce the sightline offset angles relative to the target center. Once the target and emitter 3D coordinates within the test range are found, the calculation of the sightline offset angles follows the same procedure that is used for static targets described hereinabove.
  • The sightline offset angle is added to the horizontal and vertical angles of the target within the firing range coordinate system (derived from the target's surveyed 3D coordinates) in order to determine the horizontal and vertical angles of the sightline in the firing range coordinate system. After sightline orientation is found, the intersection of the sightline with the target path plane is calculated.
  • Once the drop of the round, which is calculated relative to the sightline, is provided by the ballistics model, the 3D coordinate of the fired round on the target path plane can be found by subtracting the drop from the sightline intersection with the target path plane.
  • The moving target setup will now be described. The target sled moves along a linear track under remote control. Preferably, motion is provided by an electric motor powered by 12 VDC batteries, although pneumatic, hydraulic or other mechanical drivers that are well known in the art may provide motion. The target sled can move in either direction along the track and the target can be popped up or down. Remote control may be independent of the system computer. The target track, while not necessarily level, is assumed to be flat and straight such that it does not cause up and down or left and right deviations of the target from a straight line of more than 20 mm. While an assumption of 20 mm may be optimistic, this assumption allows the target path to be considered as following a straight line. When the 20 mm assumption does not hold, additional measurements of the vertical position of the target centroid as the target moves along the track can be used to increase the vertical accuracy of the target centroid calculations. The beginning and ending target sled positions (also referred to as "home" and "end") are electronically referenced to ensure that these positions are known and repeatable. Sensors are used on the track to detect the home and end reference positions of the target sled. The target and emitter centroid positions are measured in 3D coordinates (via survey to + or −10 mm) for the home and end positions of the target sled. These 3D target and emitter centroid home and end positions are defined as TB & TE, and EB & EE, respectively. The moving target setup procedure requires that the rangefinder SetRangefinderZero( ) command be sent at the time of surveying the target's home position 3D coordinates. Subsequent rangefinder readings will provide the relative target sled offset from the target home position, TB. The 3D coordinates of TB and TE are used to produce a 3D unit vector tBtE along the track; multiplying this unit vector by the linear target offset from the home position produces the 3D target offset. Summing this offset with the 3D coordinates of the target home position TB provides the 3D coordinates of the intermediate target position.
  • A system emitter is physically attached to the moving target sled, and the position of the emitter centroid relative to the target centroid is fixed. The 3D coordinates of the emitter centroid at an intermediate target position are calculated by summing TO with the 3D coordinates of the emitter at the home position, EB (since the target and emitter are assumed to follow parallel paths). The 3D coordinates of the target and emitter centroid at the intermediate target sled position are used to find the shooter's aimpoint angles (εx, εy) relative to the target centroid. The aimpoint angles produced will be for the aimpoint at the point in time that corresponds to the video frame used to mark the emitter position. The target path plane, TPP, is defined as containing the TB and TE points and is constrained to be normal to the firing range n-e plane by defining a third point on the plane as TB′=TB+(0,0,1). These three points on the target path plane are used to produce the 3D equation of the target path plane, which will be used in finding the sightline intersection with the target path plane.
  • During an event, target linear offsets are captured continuously during target movement and referenced to an event start time. These offsets and time references are stored during the event and transferred to the system computer after the completion of the event. When analyzing the event via the recorded video frames, the point in time corresponding to the midpoint of the video frame selected for marking the emitter position, t@MarkedFrame, is used to retrieve the stored target linear offset for the marked frame. The point in time when the shot is fired, t@shotFired, is used to retrieve the target linear offset for when the shot was fired. The linear offset, doffset, of the target sled from the home position for a particular time is provided via an optical rangefinder that bounces an infrared beam off a retroreflector attached to the moving target sled. The offset is provided to an accuracy of + or −10 mm. This is equal to the measurement accuracy specification of the rangefinder for an average of twenty 2000 Hz measurements providing a 200 Hz sample output (5 ms sample period). The linear target sled offset doffset is multiplied by the track unit vector tBtE to arrive at the 3D coordinate offset of the target sled, TO, relative to the home position of the target sled. Adding TO to TB, the 3D coordinate of the target centroid at the home position, produces the 3D position of the target centroid, T3D, within the firing range (T3D=TB+TO). Adding TO to EB, the 3D coordinate of the emitter centroid at the home position, produces the 3D position of the emitter centroid, E3D, within the firing range (E3D=TO+EB). The location of the emitter in the CCD camera image will be used to determine the aimpoint angles (εx, εy) of the sightline relative to the moving target centroid at the t@MarkedFrame time. These aimpoint angles for the moving target will be calculated using the same methods as for static targets, except that the target and emitter 3D coordinates, TMF and EMF, will be provided via calculations that use the rangefinder and survey data.
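  • These coordinate updates can be sketched directly from the relations above; vectors are (n, e, h) tuples.

```python
def moving_target_coords(TB, EB, tBtE, d_offset):
    """T3D = TB + TO and E3D = EB + TO, where TO is the track unit vector
    tBtE scaled by the rangefinder's linear sled offset d_offset."""
    TO = tuple(c * d_offset for c in tBtE)            # 3D sled offset
    T3D = tuple(b + o for b, o in zip(TB, TO))        # target centroid
    E3D = tuple(b + o for b, o in zip(EB, TO))        # emitter centroid
    return T3D, E3D
```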
  • Referring again to FIG. 11, at shot fired (SF) time the 3D coordinates of the target, TSF, are found by retrieving the target linear offset from the rangefinder at the t@shotFired time. The aimpoint angles (εx, εy) are summed with the angles to TSF to produce the sightline angles (Ee, Eh) of the sightline vector pslI within the firing range relative to the surveyed shooter position P. A sightline unit vector is created by using the sightline angles (Ee, Eh) to find the direction of the sightline within the firing range. The sightline unit vector is defined as:
    $\mathbf{psl}_I = \left(\cos(E_e)/|\mathbf{psl}_I|,\ \sin(E_e)/|\mathbf{psl}_I|,\ \sin(E_h)/|\mathbf{psl}_I|\right) = (n_{psl},\ e_{psl},\ h_{psl})$
  • Round intercept (RI) is next, and the definition of RI depends on the type of round, either kinetic energy (KE) or high explosive (HE). For a KE round, the RI occurs when the round either passes through the target path plane or impacts the ground short of the target. For an HE round the RI occurs at whichever occurs first: the round impacts the target, the round impacts the ground, or, if fused, the fuse time expires. In any of these cases, the sightline angles and the sightline intersection with the target path plane must be known in order to define the path of the round. If the round impacts the ground, or airbursts, the distance from the shooter must also be known to calculate the 3D coordinate. Additionally, the time-of-flight of the round at round intercept, TOFRI, must be known to determine the target sled offset and to find the target 3D coordinates, TRI, at the time of RI.
  • KE Rounds
  • For KE rounds, the ground range to the target path plane in the n-e plane of the firing range and the sightline elevation angle determine the time-of-flight of the round unless the round impacts the ground before the target path plane. In order to determine the ground range to the target path plane, the sightline vector intersection, SLI, to be described hereinbelow, with the target path plane is calculated. The 3D coordinates of the shooter, P, and the sightline intersection, SLI, are used to find the slant range to the target path plane, rTP. The rRI range is the corresponding ground range to the target path plane parallel to the n-e plane of the firing range. The range rRI is used to set the maximum range for the ballistics model to fly the round towards the target path plane. The ballistics model provides the time-of-flight to round impact, TOFRI, for the KE round in use.
  • The flight time for a KE round depends upon whether the round passes through the target path plane or impacts the ground before the target. Therefore, depending on the outcome, TOFRI=TOFTP (time of flight to the target path plane) or TOFRI=TOFGI (time of flight to ground impact). By setting the maximum range for the ballistics model to rRI, the maximum time of flight from the ballistics model will automatically be TOFRI. Using the ballistics of the KE round in use, the drop, relative to the sightline, of the round at the target path plane is calculated and used to calculate the 3D coordinate, RRI, of the round as it intersects the target path plane. For a KE round, no consideration is given to the 3D location of the round after it passes through the target path plane. For this case, RRI=SLI−(0,0,drop).
  • For the possibility of a KE round impacting the ground before the target, RGI is defined as the 3D coordinate where the ballistics calculation indicates that the round drops to the plane of the ground. For this possibility, the time-of-flight of the round stops at the point of ground impact and the time is labeled TOFGI. If the round impacts the ground, then TOFRI=TOFGI and RRI=RGI=(pslI(n,e,0)*rGI)+P(n,e,0), which results in a 3D coordinate on the ground. The TOFRI will be used to determine the moving target centroid coordinates, TRI, at the time that the round arrives downrange. This is accomplished by obtaining a target sled offset, dRI, via the rangefinder at the shot fired time plus the TOFRI and repeating the process for finding the 3D coordinate of the target centroid within the firing range at the round intercept time, TRI. Once TRI and RRI are known, the distance between the two 3D coordinates can be used to determine whether a hit or miss occurred, based upon the size of the target.
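  • The final hit or miss determination reduces to a distance comparison; modeling the target size as a single radius is a simplification assumed for the sketch.

```python
def hit_or_miss(T_RI, R_RI, target_radius_m):
    """Hit if the 3D distance between the target centroid and the round at
    round-intercept time is within the target radius."""
    dist = sum((t - r) ** 2 for t, r in zip(T_RI, R_RI)) ** 0.5
    return dist <= target_radius_m
```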
  • High Explosive (HE) Rounds
  • For HE rounds, the TOFRI time and the RRI coordinate calculations have additional considerations. Airburst HE rounds are fused for a particular range and the time-of-flight for an air burst is dependent on the fuse range and the speed of the round. Since an HE round is still considered active after it passes the target, the intersection of the round with the target path plane is only important if the round actually hits the target (in which case the round is assumed to have exploded). If the round passes through the target path plane and then air bursts or impacts the ground, a hit or miss calculation can be performed using the target and round coordinates to determine if the target was hit. If a target hit did not occur, the air burst or ground impact coordinates are calculated and can be used to find the distance from RRI to the target at TRI. In addition, if the round impacts the ground before the target path plane, the 3D coordinate RGI is also needed to determine the distance from RRI to the target at TRI.
  • There are three possible times of flight to consider for HE rounds. The first is the fuse range related TOFHE, the second is the ground impact TOFGI, and the third is the target impact TOFTP (note, TOFTP is the same as the KE round TOFTP). For each of these TOF's, there is a related 3D round terminus coordinate. The first step is to fly the round out using the ballistics model and the aimpoint angles derived from the emitter marking at shot fired. If the round is fused, then the round is only flown out to the fuse range. Then a check is made to determine if the round air burst or hit the ground. Then another check is made to determine if the round's terminal range (for either air burst or ground impact) is before or after the target path plane. If the terminal range is less than the range to the target path plane, the round has either hit the ground or airburst in front of the target path plane and no target impact is possible. If the terminal range is greater than or equal to the range to the target path plane, then the round has either hit the target, hit the ground, or airburst. Since the determination of an air burst or ground impact has already been performed, the only possibility that needs to be checked is whether a target impact has occurred. The result is that the sightline intercept with the target path plane must be calculated to determine the TOFTP to the target path plane. This is the same calculation described hereinabove for the KE rounds.
  • If TOFHE is greater than or equal to TOFTP, then the 3D position of the round at TOFTP, RTP, can be compared with the 3D position of the target at TOFTP, TTP. The distance |RTPTTP| can be used to determine if a hit or miss occurred, where a hit would occur if the distance |RTPTTP| is less than the target radius. If a hit occurred, then RRI is set to RTP and there is no airburst or ground impact after the target path plane. If no target intersection occurs, then the 3D position of the round at either airburst or ground impact, RRI, is calculated and the distance |RRITRI| is calculated.
  • Sightline Intersection with the Target Path Plane
  • An important task in the moving target calculations is to find the intersection of the sightline vector with the target path plane. The sightline vector is defined by the shooter position P and the sightline angles (Ee,Eh) into the firing range (FIG. 11). The target path plane contains the target home and end positions, TB and TE, and is specified to be a vertically oriented plane parallel to the h-axis.
  • The first step in defining the target path plane is to add an additional point to the beginning and ending target coordinates TB and TE to make up the required third point for a plane. This third point is created by adding a unit length vector parallel to the h-axis of the firing range coordinate system to the TB coordinate. This point is labeled as T′B and is equal to:
    $T_B(n_{TB},\ e_{TB},\ h_{TB}) + (0,\ 0,\ 1) = T'_B(n_{TB'},\ e_{TB'},\ h_{TB'}).$
  • These three points, TB, TE, and T′B, define a plane that includes the path of the target along the track and is vertically oriented to be parallel to the elevation axis.
  • The equation of the target path plane is produced by using the coordinates of TB, TE, and T′B in calculating the coefficients for the general form of the equation of a plane, $Ax + By + Cz + D = 0$. The A, B, and C coefficients are the components of a vector that is normal to the target path plane, TPPN. These coefficients are calculated by taking the cross product of the vectors TBTE and TBT′B on the target path plane, where:
    $\overline{T_B T_E} = (n_{BE},\ e_{BE},\ h_{BE}) = ((n_E - n_B),\ (e_E - e_B),\ (h_E - h_B))$
    $\overline{T_B T'_B} = (n_{BB'},\ e_{BB'},\ h_{BB'}) = ((n_{B'} - n_B),\ (e_{B'} - e_B),\ (h_{B'} - h_B)) = (0,\ 0,\ 1)$
  • The cross product result $(A, B, C)$ simplifies to: $A = e_{BE}$, $B = -n_{BE}$, $C = 0$.
  • The coefficient D is calculated by taking the negative determinant of the three co-planar points, TB TE and T′B, and setting the result to be equal to D.
  • Dividing these four coefficients by the length of the vector TPPN,
    $|TPP_N| = (A^2 + B^2 + C^2)^{1/2},$
    produces the unit normal vector to the target path plane,
    $tpp_N = (A/|TPP_N|,\ B/|TPP_N|,\ C/|TPP_N|) = (a,\ b,\ c)$
    and a vector magnitude
    $d = -D/|TPP_N|,$
  • which is the minimum distance from the origin to the plane. This unit normal vector tppN and distance d will be used in finding the intersection of the sightline vector with the target path plane.
  • The sightline vector is defined by a starting point, the shooter position P, and a set of easting and elevation angles (Ee,Eh) providing the projection direction into the firing range coordinate system from P. The general equation for the sightline vector is:
    $PSL_I = P + (r_{TP} \cdot \mathbf{psl}_I).$
  • The direction vector of the sightline vector, defined hereinabove, is:
    $\mathbf{psl}_I = \left(\cos(E_e)/|\mathbf{psl}_I|,\ \sin(E_e)/|\mathbf{psl}_I|,\ \sin(E_h)/|\mathbf{psl}_I|\right) = (n_{psl},\ e_{psl},\ h_{psl})$
    and rTP is the unknown distance from the shooter P to the sightline vector intersection with the target path plane at SLI.
  • The sightline vector intersection, SL_I, with the target path plane is found by determining the unknown distance r_TP and then using r_TP in the general equation for the sightline vector. The equation:
    r_TP = −((tpp_N * P) + d)/(tpp_N * psl_I),
    is used, where the numerator is the distance from the shooter at P to the point on the plane at the minimum distance from the origin, and the denominator is the cosine of the angle between the target path plane unit vector, tpp_N, and the sightline unit vector, psl_I. The result, r_TP, is the length of the vector PSL_I, which is used to find the sightline intersection with the target path plane, SL_I, via the equation:
    SL_I = P + (r_TP * psl_I).
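  Putting the pieces together, the intersection itself can be sketched as follows, again in Python/NumPy. The angles E_e and E_h are assumed to be in radians, (tpp_n, d) come from the plane construction above, and the sign conventions follow the equations exactly as written; names are illustrative.

```python
import numpy as np

def sightline_intersection(p: np.ndarray, e_e: float, e_h: float,
                           tpp_n: np.ndarray, d: float) -> np.ndarray:
    """Return SL_I, where the sightline from shooter P at angles (E_e, E_h)
    meets the target path plane with unit normal tpp_N at distance d."""
    direction = np.array([np.cos(e_e), np.sin(e_e), np.sin(e_h)])
    psl_i = direction / np.linalg.norm(direction)   # unit sightline vector
    # r_TP = -((tpp_N . P) + d) / (tpp_N . psl_I)
    r_tp = -(np.dot(tpp_n, p) + d) / np.dot(tpp_n, psl_i)
    return p + r_tp * psl_i                         # SL_I = P + r_TP * psl_I
```

  The denominator is the cosine of the angle between the normal and the sightline, so a value near zero flags a sightline nearly parallel to the target path plane.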
  • After the scenario has begun, a shooter will align the projectile launcher sight with a target. The center of the lens image at the trigger event provides the reference for the aimpoint. The infrared emitters illuminate at the trigger event to provide distance measurements from the aimpoint reference to the fixed IR emitter or to the moving target position as determined by the downrange controller. With the information collected at the trigger event, the aimpoint and projectile launcher effects are calculated, preferably using the software resident in the system computer. Output from the sensors and the camera is integrated into the VCR with titling for event analysis. The information collected, including references and the calculated aimpoint and projectile dynamics, provides data for training and projectile launcher testing.
  • Errors between the actual and predicted aimpoint result from uncertainties associated with the shooter, target, and emitter positions. These errors are range dependent and are minor contributors to the overall error. In addition, errors associated with the offset of the CCD camera from the projectile launcher sightline, the lens mapping, and the emitter marking process can also arise; these errors can be minimized through the techniques described hereinabove. Finally, the uncertainty associated with the bore-sight as performed by an expert gunner is the limiting factor in the cumulative error of the system.
  • While preferred embodiments have been shown and described, various modifications and substitutions may be made thereto without departing from the spirit and scope of the present invention. Accordingly, it is to be understood that the present invention has been described by way of illustration and not limitation.
  • Any element in a claim that does not explicitly state “means for” performing a specified function, or “step for” performing a specific function, is not to be interpreted as a “means” or “step” clause as specified in 35 U.S.C. 112 paragraph 6. In particular, the use of “step of” in the claims herein is not intended to invoke the provisions of 35 U.S.C. 112 paragraph 6.

Claims (37)

1. A system for determining a location vector for an aimpoint relative to a target located within a firing range, comprising:
a. the target having a perimeter further defining a centroid for locating the target;
b. a near infrared emitter, placed within the firing range, the emitter being further placed outside the perimeter of the target and the emitter being associated with the centroid for determining the angular offset between the target and the emitter;
c. an emitter controller for wirelessly controlling the illumination sequence of the emitter;
d. a projectile launcher for testing;
e. a digital camera having a lens, the camera adjustably mounted on the projectile launcher for alignment of the camera optical axis with the projectile launcher sight line, the camera field of view encompassing the emitter, and the camera being mapped to derive the angular offset of the image formed by the emitter emission from the target centroid position;
f. a sensor responsive to the projectile launcher for detecting triggering events when the projectile launcher is fired;
g. communication means for transferring information between the computer and each of the sensor, emitter, and camera; and
h. a system computer for receiving inputs from the emitter, the camera and the sensor for controlling the emitter controller, and for determining the aimpoint vector at the triggering event.
2. The system of claim 1 wherein the sensor comprises a microphone.
3. The system of claim 1 wherein the camera includes an array of photonic detectors sensitive to near infrared light.
4. The system of claim 3 wherein the camera further includes a filter for blocking non near infrared light and permitting near infrared light to pass through the filter.
5. The system of claim 1 wherein the near infrared emitter comprises a halogen lamp and a filter for blocking non near infrared light.
6. The system of claim 1 wherein the communication means comprises relay radios.
7. The system of claim 1 wherein the field of coverage is between 10 meters and 100 meters.
8. The system of claim 1 wherein the emitter is located between 50 meters and 1000 meters from the camera lens plane.
9. The system of claim 1 wherein the computer includes a computer program for calculating the fall of the shot.
10. The system of claim 9 wherein the computer program calculates the aerodynamic effects of air velocity upon the fall of the shot.
11. The system of claim 9 wherein the computer includes a computer program for calculating the effects of the projectile launcher burst.
12. The system of claim 1 including a video recording means controlled by the system computer for capturing visual and projectile launcher event information.
13. The system of claim 12 wherein the visual and projectile launcher event information is captured within the audio portion of the recording.
14. The system of claim 12 wherein the video recording means comprises a digital videocassette recorder.
15. A system for determining a location vector for an aimpoint relative to a moving target located within a firing range, comprising:
a. the moving target having a perimeter further defining a centroid for locating the target;
b. a near infrared emitter, placed within the firing range, the emitter being further placed outside the perimeter of the moving target and the emitter being associated with the centroid for determining the angular offset between the target and the emitter;
c. an emitter controller for wirelessly controlling the illumination sequence of the emitter;
d. a projectile launcher for testing;
e. a digital camera having a lens, the camera adjustably mounted on the projectile launcher for alignment of the camera optical axis with the projectile launcher sight line, the camera field of view encompassing the emitter, and the camera being mapped to derive the angular offset of the image formed by the emitter emission from the target centroid position;
f. a sensor responsive to the projectile launcher for detecting triggering events when the projectile launcher is fired;
g. a downrange controller for controlling the position of the moving target, wherein the computer is responsive to output from the downrange controller for determining the aimpoint vector at the triggering event;
h. communication means for transferring information between the computer and each of the sensor, emitter, and camera; and
i. a system computer for receiving inputs from the emitter, the camera, the downrange controller and the sensor for controlling the emitter controller, and for determining the aimpoint vector at the triggering event.
16. The system of claim 15 wherein the moving target is movably mounted on tracks.
17. The system of claim 15 wherein the downrange controller is provided with capacity to store up to 60 seconds of motion.
18. The system of claim 15 wherein the sensor comprises a microphone.
19. The system of claim 15 wherein the camera includes an array of photonic detectors sensitive to near infrared light.
20. The system of claim 19 wherein the camera further includes a filter for blocking non near infrared light and permitting near infrared light to pass through the filter.
21. The system of claim 15 wherein the near infrared emitter comprises a halogen lamp and a filter for blocking non near infrared light.
22. The system of claim 15 wherein the communication means comprises relay radios.
23. The system of claim 15 wherein the field of coverage is between 10 meters and 100 meters.
24. The system of claim 15 wherein the emitter is located between 50 meters and 1000 meters from the camera lens plane.
25. The system of claim 15 wherein the computer includes a computer program for calculating the fall of the shot.
26. The system of claim 25 wherein the computer program calculates the aerodynamic effects of air velocity upon the fall of the shot.
27. The system of claim 25 wherein the computer includes a computer program for calculating the effects of the projectile launcher burst.
28. The system of claim 15 including a video recording means controlled by the system computer for capturing visual and projectile launcher event information.
29. The system of claim 28 wherein the visual and projectile launcher event information is captured within the audio portion of the recording.
30. The system of claim 28 wherein the video recording means comprises a digital videocassette recorder.
31. A method for determining projectile launcher aimpoint comprising:
a. selecting the location of a shooter at a firing line;
b. surveying a firing range for determining target and emitter coordinates;
c. calculating the centroid of a target having a defined perimeter, the target being placed within the firing range;
d. placing emitters on the firing range externally to the target perimeter;
e. associating the target with the emitter;
f. mounting a camera sensitive to near infrared light to the projectile launcher;
g. calibrating the optical axis of the camera with the boresight of the projectile launcher;
h. mapping the camera with the target and emitter positions;
i. providing equipment means for controlling the emitter and for determining and recording the predicted aimpoint at a triggering event; and
j. determining the projectile launcher aimpoint at the triggering event.
32. The method of claim 31 further comprising the following when the target is a moving target:
a. providing a downrange controller for controlling the position of the moving target, wherein the computer is responsive to output from the downrange controller for determining the aimpoint vector at the triggering event.
33. The method of claim 31 further comprising determining the fall of the shot.
34. The method of claim 33 further comprising determining projectile launcher burst effects.
35. The method of claim 31 further comprising recording the projectile launcher aimpoint determination data on the equipment means and providing performance feedback to personnel.
36. The method of claim 31 further comprising preparing an evaluation report for a projectile from data captured by the equipment means.
37. A system for determining the contemporaneous position of a moving target comprising: a movable platform; the target mounted on the platform; an emitter mounted on the platform, the emitter being in fixed relationship to the target; a reflector mounted on the platform, the reflector being in fixed relationship to the emitter; a rangefinder; and a downrange controller receiving range information from the rangefinder, wherein the position of the reflector is determined by the rangefinder and the position of the target centroid is determined by the target position relative to the emitter.
US11/398,400 2006-04-05 2006-04-05 Projectile targeting analysis Abandoned US20070238073A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/398,400 US20070238073A1 (en) 2006-04-05 2006-04-05 Projectile targeting analysis

Publications (1)

Publication Number Publication Date
US20070238073A1 2007-10-11

Family ID=38575732

Family Applications (1)

Application Number Title Priority Date Filing Date
US11/398,400 Abandoned US20070238073A1 (en) 2006-04-05 2006-04-05 Projectile targeting analysis

Country Status (1)

Country Link
US (1) US20070238073A1 (en)

Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3907433A (en) * 1972-11-03 1975-09-23 Jacques Nault Moving target firing simulator and a method of adjustment of said simulator
US4340370A (en) * 1980-09-08 1982-07-20 Marshall Albert H Linear motion and pop-up target training system
US4439156A (en) * 1982-01-11 1984-03-27 The United States Of America As Represented By The Secretary Of The Navy Anti-armor weapons trainer
US4464115A (en) * 1981-12-23 1984-08-07 Detras Training Aids Limited Pulsed laser range finder training or test device
US4488876A (en) * 1982-03-26 1984-12-18 The United States Of America As Represented By The Secretary Of The Navy Aimpoint processor for quantizing target data
US4804325A (en) * 1986-05-15 1989-02-14 Spartanics, Ltd. Weapon training simulator system
US4835621A (en) * 1987-11-04 1989-05-30 Black John W Gun mounted video camera
US4923402A (en) * 1988-11-25 1990-05-08 The United States Of America As Represented By The Secretary Of The Navy Marksmanship expert trainer
US4955812A (en) * 1988-08-04 1990-09-11 Hill Banford R Video target training apparatus for marksmen, and method
US5213503A (en) * 1991-11-05 1993-05-25 The United States Of America As Represented By The Secretary Of The Navy Team trainer
US5215465A (en) * 1991-11-05 1993-06-01 The United States Of America As Represented By The Secretary Of The Navy Infrared spot tracker
US5289993A (en) * 1991-08-30 1994-03-01 Mcwilliams Joel K Method and apparatus for tracking an aimpoint with arbitrary subimages
US5320358A (en) * 1993-04-27 1994-06-14 Rpb, Inc. Shooting game having programmable targets and course for use therewith
US5577733A (en) * 1994-04-08 1996-11-26 Downing; Dennis L. Targeting system
US5686690A (en) * 1992-12-02 1997-11-11 Computing Devices Canada Ltd. Weapon aiming system
US5929444A (en) * 1995-01-31 1999-07-27 Hewlett-Packard Company Aiming device using radiated energy
US5991043A (en) * 1996-01-08 1999-11-23 Tommy Anderson Impact position marker for ordinary or simulated shooting
US6283756B1 (en) * 2000-01-20 2001-09-04 The B.F. Goodrich Company Maneuver training system using global positioning satellites, RF transceiver, and laser-based rangefinder and warning receiver
US6616452B2 (en) * 2000-06-09 2003-09-09 Beamhit, Llc Firearm laser training system and method facilitating firearm training with various targets and visual feedback of simulated projectile impact locations
US20030195046A1 (en) * 2000-05-24 2003-10-16 Bartsch Friedrich Karl John Target shooting scoring and timing system
US7158167B1 (en) * 1997-08-05 2007-01-02 Mitsubishi Electric Research Laboratories, Inc. Video recording device for a targetable weapon
US7345265B2 (en) * 2004-07-15 2008-03-18 Cubic Corporation Enhancement of aimpoint in simulated training systems

Cited By (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060283069A1 (en) * 2003-03-18 2006-12-21 Nygaard Anton M Targeting a hunting or sports weapon
US9891030B1 (en) 2004-03-29 2018-02-13 Victor B. Kley Molded plastic cartridge with extended flash tube, sub-sonic cartridges, and user identification for firearms and site sensing fire control
US9470485B1 (en) 2004-03-29 2016-10-18 Victor B. Kley Molded plastic cartridge with extended flash tube, sub-sonic cartridges, and user identification for firearms and site sensing fire control
US8621774B1 (en) 2004-03-29 2014-01-07 Metadigm Llc Firearm with multiple targeting laser diodes
US7926408B1 (en) * 2005-11-28 2011-04-19 Metadigm Llc Velocity, internal ballistics and external ballistics detection and control for projectile devices and a reduction in device related pollution
US20070166669A1 (en) * 2005-12-19 2007-07-19 Raydon Corporation Perspective tracking system
US9671876B2 (en) * 2005-12-19 2017-06-06 Raydon Corporation Perspective tracking system
US20150355730A1 (en) * 2005-12-19 2015-12-10 Raydon Corporation Perspective tracking system
US9052161B2 (en) * 2005-12-19 2015-06-09 Raydon Corporation Perspective tracking system
US8100694B2 (en) * 2007-06-11 2012-01-24 The United States Of America As Represented By The Secretary Of The Navy Infrared aimpoint detection system
US20110003269A1 (en) * 2007-06-11 2011-01-06 Rocco Portoghese Infrared aimpoint detection system
US20110003270A1 (en) * 2007-08-17 2011-01-06 Jehan Jr Henry I In breech training device
US20090253103A1 (en) * 2008-03-25 2009-10-08 Hogan Jr Richard Russell Devices, systems and methods for firearms training, simulation and operations
US8827706B2 (en) 2008-03-25 2014-09-09 Practical Air Rifle Training Systems, LLC Devices, systems and methods for firearms training, simulation and operations
US20100003642A1 (en) * 2008-06-30 2010-01-07 Saab Ab Evaluating system and method for shooting training
US8876533B2 (en) * 2008-06-30 2014-11-04 Saab Ab Evaluating system and method for shooting training
US20110179689A1 (en) * 2008-07-29 2011-07-28 Honeywell International, Inc Boresighting and pointing accuracy determination of gun systems
US8006427B2 (en) * 2008-07-29 2011-08-30 Honeywell International Inc. Boresighting and pointing accuracy determination of gun systems
US8270081B2 (en) * 2008-11-10 2012-09-18 Corporation For National Research Initiatives Method of reflecting impinging electromagnetic radiation and limiting heating caused by absorbed electromagnetic radiation using engineered surfaces on macro-scale objects
US20100118407A1 (en) * 2008-11-10 2010-05-13 Corporation For National Research Initiatives Method of reflecting impinging electromagnetic radiation and limiting heating caused by absorbed electromagnetic radiation using engineered surfaces on macro-scale objects
US8655257B2 (en) * 2009-08-24 2014-02-18 Daniel Spychaiski Radio controlled combat training device and method of using the same
US20120208150A1 (en) * 2009-08-24 2012-08-16 Daniel Spychaiski Radio controlled combat training device and method of using the same
US8986010B2 (en) * 2011-12-13 2015-03-24 Agency For Defense Development Airburst simulation system and method of simulation for airburst
US20140065578A1 (en) * 2011-12-13 2014-03-06 Joon-Ho Lee Airburst simulation system and method of simulation for airburst
EP2795891B1 (en) * 2011-12-23 2018-01-24 H4 Engineering, Inc. A portable system for high quality automated video recording
US9830408B1 (en) * 2012-11-29 2017-11-28 The United States Of America As Represented By The Secretary Of The Army System and method for evaluating the performance of a weapon system
US9921017B1 (en) 2013-03-15 2018-03-20 Victor B. Kley User identification for weapons and site sensing fire control
US20190244391A1 (en) * 2016-10-20 2019-08-08 Spookfish Innovations Pty Ltd An aerial camera boresight calibration system
US20180202777A1 (en) * 2017-01-13 2018-07-19 Action Target Inc. Software and sensor system for controlling range equipment
US10876821B2 (en) * 2017-01-13 2020-12-29 Action Target Inc. Software and sensor system for controlling range equipment
US20190078855A1 (en) * 2017-09-13 2019-03-14 Rory Berger Variable Velocity Ballistic
US20190120578A1 (en) * 2017-09-13 2019-04-25 Rory Berger Launcher with Internal Variable Velocity Valve System
US11473875B2 (en) * 2017-09-15 2022-10-18 Tactacam LLC Weapon sighted camera system
US20230037723A1 (en) * 2017-09-15 2023-02-09 Tactacam LLC Weapon sighted camera system
US20210010782A1 (en) * 2017-09-15 2021-01-14 Tactacam LLC Weapon sighted camera system
US10077969B1 (en) * 2017-11-28 2018-09-18 Modular High-End Ltd. Firearm training system
WO2019106556A1 (en) * 2017-11-28 2019-06-06 Modular High-End Ltd Firearm training system
US10670373B2 (en) 2017-11-28 2020-06-02 Modular High-End Ltd. Firearm training system
US10876818B2 (en) * 2017-11-28 2020-12-29 Modular High-End Ltd. Firearm training systems and methods
US10288380B1 (en) * 2018-07-30 2019-05-14 Sig Sauer, Inc. Energy transfer indicator
US10942008B2 (en) 2018-07-30 2021-03-09 Sig Sauer, Inc. Energy transfer indicator in a digital reticle
US10591255B2 (en) 2018-07-30 2020-03-17 Sig Sauer, Inc. Energy transfer indicator in a digital reticle
CN115289908A (en) * 2022-06-07 2022-11-04 西北工业大学 Method and device for guiding air defense missile introduction section through remote control instruction

Similar Documents

Publication Publication Date Title
US20070238073A1 (en) Projectile targeting analysis
EP1774250B1 (en) Electronic sight for firearm, and method of operating same
EP0873492B1 (en) Impact position marker for ordinary or simulated shooting
US9897415B2 (en) Infrared-light and low-light two-phase fusion night-vision sighting device
KR100963681B1 (en) Remote gunshot system and method to observed target
US9285189B1 (en) Integrated electronic sight and method for calibrating the reticle thereof
KR101211100B1 (en) Fire simulation system using leading fire and LASER shooting device
US7810273B2 (en) Firearm sight having two parallel video cameras
CN101512282B (en) Ballistic ranging methods and portable systems for inclined shooting
US20120178053A1 (en) Sniper training system
US9766042B2 (en) Integrated precise photoelectric sighting system
US9689644B1 (en) Photoelectric sighting device capable of performing 3D positioning and display of target object
CN103759598B (en) A kind of controlled infrared electro detection target assembly and detection method
US20120274922A1 (en) Lidar methods and apparatus
US9897416B2 (en) Photoelectric sighting device
US11002512B2 (en) Firearm marksmanship system with chamber insert
US9410769B1 (en) Integrated precise photoelectric sighting system
JPH0124275B2 (en)
US6973865B1 (en) Dynamic pointing accuracy evaluation system and method used with a gun that fires a projectile under control of an automated fire control system
KR20160127350A (en) Optical device utilizing ballistic zoom and methods for sighting a target
US4777861A (en) Missile aiming sight
RU2403526C2 (en) System for aiming firing from shelter
RU114768U1 (en) ARROW SIMULATOR AND OPTICAL-ELECTRONIC DEVICE TO IT (OPTIONS)
KR101815678B1 (en) Armament system interworking with image device and method for operating the same
CN115984369A (en) Shooting aiming track acquisition method based on gun posture detection

Legal Events

Date Code Title Description
AS Assignment

Owner name: GOVERMMENT OF THE UNITED STATES, SECRETARY OF THE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PORTOGHESE, ROCCO;PURVIS, EDWARD JOHN;HEBB, RICHARD CHRISTOPHER;REEL/FRAME:017781/0977;SIGNING DATES FROM 20060329 TO 20060401

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION