WO2011066164A1 - Optical impact control system - Google Patents

Optical impact control system

Info

Publication number
WO2011066164A1
Authority
WO
WIPO (PCT)
Prior art keywords
light
target
aperture
optical
photodetector
Prior art date
Application number
PCT/US2010/057167
Other languages
French (fr)
Inventor
Sergey Sandomirsky
Vladimir Esterkin
Thomas Forrester
Tomasz Jannson
Andrew Kostrzewski
Alexander Naumov
Naibing Ma
Sookwang Ro
Paul Shnitser
Original Assignee
Physical Optics Corporation
Priority date
Filing date
Publication date
Application filed by Physical Optics Corporation filed Critical Physical Optics Corporation
Publication of WO2011066164A1 publication Critical patent/WO2011066164A1/en

Classifications

    • F MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F42 AMMUNITION; BLASTING
    • F42C AMMUNITION FUZES; ARMING OR SAFETY MEANS THEREFOR
    • F42C13/00Proximity fuzes; Fuzes for remote detonation
    • F42C13/02Proximity fuzes; Fuzes for remote detonation operated by intensity of light or similar radiation
    • F42C13/023Proximity fuzes; Fuzes for remote detonation operated by intensity of light or similar radiation using active distance measurement

Definitions

  • the present invention relates generally to optical detection devices, and more particularly, some embodiments relate to optical impact systems with optical countermeasure resistance.
  • the limitation in the performance range of non-lethal weapon systems is generally associated with the kinetic energy of the bullet or projectile at the impact.
  • the initial projectile velocity must be high - otherwise the projectile trajectory will be influenced by wind, atmospheric turbulence, or the target may move during projectile travel time.
  • the large initial velocity determines the kinetic energy of a bullet at the target impact. This energy is usually sufficient to penetrate a human tissue or to cause large blunt trauma, thus making the weapon system lethal.
  • a trigger device that activates the mechanism that reduces the projectile kinetic energy.
  • it can be a timer that activates this mechanism at a predetermined moment after a shot.
  • More complex devices involve various types of range finders that measure the distance to a target.
  • Such range finder can be installed on the shotgun or launcher and can transmit the information about a target range to projectile before a shot.
  • Such a weapon may be lethal to bystanders in front of the target who intercept the projectile trajectory after the real target range has been transmitted to the projectile.
  • Weapon systems that carry a rangefinder or proximity sensor on the projectile are preferable because they are safer and better protected from such occasional events.
  • range finders or proximity sensors used in bombs, projectiles, or missiles.
  • Passive (capacitive or inductive) proximity sensors react to the variation of the electromagnetic field around the projectile when a target appears at a certain distance from the sensor. This distance is very short (several feet, usually), so they leave only a short time for the slowdown mechanism to reduce the projectile's kinetic energy before it hits the target.
  • Active sensors use acoustic, radio frequency, or light emission to detect a target. Acoustic sensors require a relatively large emitting aperture that is not available on small-caliber projectiles. A small emission aperture also causes radio waves to spread into a large angle, so any object located to the side of the projectile trajectory can trigger the slow-down mechanism, leaving the target intact.
  • light emission, even from the small aperture available on small-caliber projectiles, may be made with small divergence so that only objects along the projectile trajectory are illuminated.
  • the light reflected from these objects is used in optical range finders or proximity sensors to trigger a slow-down mechanism.
  • a capability to burst the round at a predefined distance from the target would greatly increase the effectiveness of the round.
  • the Marine Corps plans to fire these smart munitions from current legacy systems (the M32 multishot and M203 under-barrel launcher) and the anticipated XM320 single-shot launcher.
  • an optical impact system is attached to fired munitions.
  • the optical impact system controls munitions termination by sensing proximity to a target and preventing countermeasures from causing false munitions termination.
  • Embodiments can be implemented in a variety of munitions, such as small and mid caliber, applicable in non-lethal weapons, in weapons of high lethality with airburst capability, and in guided air-to-ground and cruise missiles.
  • Embodiments can improve the accuracy, reliability, and lethality of munitions, depending on their designation, without modification of the weapon itself, and make the weapon resistant to optical countermeasures.
  • Figure 1 illustrates a first embodiment of the present invention.
  • Figure 2 illustrates a particular embodiment of the invention in assembled and exploded views.
  • Figure 3 is a schematic diagram illustrating two different configurations of light source optics using a laser source implemented in accordance with embodiments of the invention.
  • Figure 4 is a diagram illustrating three different detector types, implemented in accordance with embodiments of the invention.
  • Figure 5 is a schematic diagram illustrating two different configurations of the detector optics implemented in accordance with embodiments of the invention.
  • Figure 6 illustrates the operation of a splitting mechanism according to an embodiment of the invention.
  • Figure 7 illustrates an embodiment of the invention implemented in conjunction with medium caliber projectiles with airburst capabilities.
  • Figure 8 illustrates a schematic diagram of electronic circuitry implemented in accordance with an embodiment of the invention.
  • Figure 9 illustrates a further embodiment of the invention.
  • FIG. 10 illustrates an optical impact system with anti countermeasure functionality implemented in accordance with an embodiment of the invention.
  • Figure 11 illustrates the geometry of an edge emitting laser.
  • Figure 12 illustrates an optical triangulation geometry.
  • Figure 13 illustrates use of source contour imaging (SCI) to find the center of gravity of a laser source's strip transversal dimension, implemented in accordance with an embodiment of the invention.
  • SCI source contour imaging
  • Figure 14 illustrates an imaging lens geometry.
  • Figure 15 illustrates a method of detecting target size implemented in accordance with an embodiment of the invention.
  • Figure 16 illustrates an embodiment of the invention utilizing vignetting for determining if a target is within a predetermined distance range.
  • Figure 17 illustrates a lensless light source for use in an optical proximity sensor implemented in accordance with an embodiment of the invention.
  • Figure 18 illustrates a dual lens geometry.
  • Figure 19 illustrates two detector geometries for use with reflection filters implemented in accordance with embodiments of the invention.
  • Figure 20 illustrates a laser diode array having a spatial signature implemented in accordance with an embodiment of the invention.
  • Figure 21 illustrates a laser diode mask for implementing a spatial signature in accordance with an embodiment of the invention.
  • Figure 22 illustrates a laser light signal with pulse length modulation implemented in accordance with an embodiment of the invention.
  • Figure 23 illustrates a novelty filtering operation for edge detection implemented in accordance with an embodiment of the invention.
  • Figure 24 illustrates multi-wavelength light source and detection implemented in accordance with an embodiment of the invention.
  • Figure 25 illustrates a method of pulse detection using thresholding implemented in accordance with an embodiment of the invention.
  • Figure 26 illustrates a method of pulse detection using low pass filtering and thresholding implemented in accordance with an embodiment of the invention.
  • Figure 27 illustrates a multi-wavelength variable pulse coding operation implemented in accordance with an embodiment of the invention.
  • FIG. 28 illustrates an energy harvesting sub-system implemented in accordance with an embodiment of the invention.
  • FIG 29 illustrates an optical impact profile during target detection in accordance with an embodiment of the invention.
  • the figures are not intended to be exhaustive or to limit the invention to the precise form disclosed. It should be understood that the invention can be practiced with modification and alteration, and that the invention be limited only by the claims and the equivalents thereof.
  • An embodiment of the present invention is an optical impact system installed on a plurality of projectiles of various calibers from 12-gauge shotgun rounds through medium caliber grenades to guided missiles with medium or large initial (muzzle) velocity that can detonate high explosive payloads at an optimal distance from a target in airburst configuration or can reduce the projectile's kinetic energy before hitting a target located at any (both small and large) range from a launcher or a gun.
  • the optical impact system comprises a plurality of laser light sources operating at orthogonal optical wavelengths, and signal analysis electronics minimizes the effects of laser countermeasures to reduce false fire probability.
  • the optical impact system may be used in non-lethal munitions or in munitions with enhanced lethality.
  • the optical impact system may include a projectile body on which it is mounted, a plurality of laser transmitters and photodetectors implementing a principle of optical triangulation, a deceleration mechanism (for non-lethal embodiments) that is activated by the optical impact system, an expelling charge with a fuse also activated by the optical impact system, and a projectile payload.
  • the optical impact system is comprised of two separate parts of approximately equal mass.
  • One of these parts includes a light source comprised of a laser diode and collimating optics that direct the light emitted by the laser diode parallel to the projectile axis.
  • the second part includes receiving optics and a photodetector located in a focal plane of the receiving optics while being displaced at a predetermined distance from the optical axis of the receiving optics.
  • Both parts of the optical impact system are connected to an electric circuit that contains a miniature power supply (battery) activated by an inertial switch during launch; a pulse generator to send light pulses with a high repetition rate and to detect the light reflected from a target synchronously with the emitted pulses; and a comparator that activates a deceleration mechanism and a fuse when the amplitude of the reflected light exceeds the established threshold.
  • a spring or explosive between sensor parts separates the parts after they are discharged from the projectile.
  • the optical impact system is disposed in an ogive of an airburst round.
  • the optical impact system comprises a laser diode with collimating optics disposed along the central axis of a projectile and an array of photodetectors arranged in an axially symmetric pattern around the laser diode.
  • When any light-reflecting object intersects the projectile trajectory within a certain predetermined distance in front of the projectile, the optical impact system generates a signal to the deceleration mechanism and to the fuse.
  • the fuse ignites the expelling charge that forces both parts of the proximity sensor to expel from a projectile.
  • the recoil from the sensor expulsion reduces the momentum of the remaining projectile and reduces its kinetic energy, so a more compact deceleration mechanism can be used to further reduce the projectile kinetic energy to a non-lethal level.
  • the sensor expulsion also clears the path for the projectile payload to hit a target. Without restraint from the projectile body, springs initially located between the two parts of the sensor force their separation such that each of them receives momentum in the direction perpendicular to the projectile trajectory, to avoid striking the target with the sensor parts.
  • the deceleration mechanism needs a certain time for the reduction of the kinetic energy of the remaining part of projectile to the safe level. The time available for this process depends on the distance at which a target can be detected.
  • an increase in detecting range at a given pulse energy available from a laser diode is achieved by using a special orientation of the laser diode, with its p-n junction being perpendicular to the plane where both the receiver and the emitter are located.
  • the light is emitted from a p-n junction that usually has a thickness of approximately 1 μm, and its width is several micrometers. After passing the collimating lens, the light beam has an elliptical shape with the long axis lying in the plane perpendicular to the p-n junction plane.
  • the light reflected from a diffuse target is picked up by a receiving lens, which creates an elliptical image of the illuminated target area in the focal plane.
  • the long axis of this spot is perpendicular to the plane where a light emitter and a photodetector are located.
  • the movement of the projectile towards the target causes displacement of the spot in the focal plane.
  • a photocurrent is generated and compared with a threshold value.
  • the photocurrent will reach the threshold level faster with the spot oriented as described above, so the sensor performance range can be larger and the time available for the deceleration mechanism to reduce the projectile velocity is larger, thus enhancing the safety of non-lethal munitions usage.
  • an anti-countermeasure functionality of the optical impact system is implemented to reduce the probability of false fire, which can be caused by a laser countermeasure transmitting at the same wavelength as the optical impact system and with the same modulation frequency.
  • the anti-countermeasure embodiment of the optical impact system uses a plurality of light sources transmitting at different wavelengths, and the signal analysis electronics generates an output fire trigger signal only if a reflected signal at both wavelengths, with a modulation frequency identical to that of the transmitted light, is detected. There is a low probability that a countermeasure will replicate both wavelengths and the modulation frequency simultaneously.
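  • As a rough illustration of this two-wavelength AND logic, the following minimal Python sketch correlates each wavelength channel against the known modulation pattern and fires only if both channels pass; the function names, correlation score, and threshold are illustrative assumptions, not the patent's actual circuit:

        def fire_trigger(ch1_samples, ch2_samples, mod_pattern, threshold):
            # a channel is "valid" if its correlation with the known modulation
            # pattern exceeds the threshold
            def channel_valid(samples):
                score = sum(s * m for s, m in zip(samples, mod_pattern))
                return score >= threshold
            # AND logic: fire only when both wavelength channels are valid
            return channel_valid(ch1_samples) and channel_valid(ch2_samples)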
  • FIG. 1 illustrates a first embodiment of the present invention.
  • the sensor 126 is designed to focus light on a surface, collect and focus the reflected light, and detect the reflected light.
  • a sensor 126 includes a light source, such as a laser diode 105.
  • the laser diode 105 may comprise a vertical cavity surface emitting (VCSEL) laser diode, or an edge-emitting laser diode such as a separate-confinement heterostructure (SCH) laser diode.
  • the components of the sensor 126 are located in the main housing 132.
  • Within the main housing 132 are the laser housing 101 and detector housing 118.
  • the laser housing 101 contains the collimating optics 103 and laser diode 105.
  • the collimating optics 103 may comprise a spherical or cylindrical lens.
  • the detector housing 118 contains the focusing lens 108 and detector 110.
  • the focusing lens 108 may be a spherical or cylindrical lens.
  • the main housing is insertable into a cartridge housing 133 to attach to the projectile.
  • the sensor 126 also includes an optical projection system configured such that the light from the laser diode 105 is substantially in focus within a predetermined distance range.
  • the optical projection system comprises collimating lens 108 which intercepts the diverging beam (for example, beam 327 of Figure 3) coming from the laser diode 105 and produces a collimated beam (for example, beam 328 of Figure 3) to the illumination spot of the target surface (for example, target 339 of Figure 3).
  • a collimated beam provides a more uniform light spot across a distance range compared to a beam focused to a particular focal point.
  • the projection system may include converging lenses, including cylindrical lenses, focused such that the beam is substantially in focus within the predetermined distance range.
  • the image plane may be at a point within the predetermined distance range, such that at the beginning of the predetermined distance range, the beam is suitably in focus for detection.
  • the operating power of the laser can be increased. This can be achieved while still maintaining low power consumption by modulating the laser diode 105.
  • powering the laser diode 105 in pulsed mode operation, as opposed to continuous wave (CW) drive, also allows higher power output.
  • the detection range of the sensor is inherently limited due to the field-of-view of the receiving optics 108 and its ability to collect and focus the reflected light to the detector 110.
  • the distance range that prompts activation of the fuze may be tailored according to these parameters.
  • When any object is introduced into the path of the laser beam spot (for example, beam 328 of Figure 3), light is reflected from its surface.
  • An optical imaging system, for example including an aperture and receiving lens 108, collects the reflected light and produces a converging beam (for example, beam 331 of Figure 3) to the detector 110.
  • the detector 110 comprises only a single-pixel, non-position-sensitive detector.
  • PSD position-sensitive detector
  • FIG. 2 illustrates a particular embodiment of the invention in assembled and exploded views.
  • the illustrated embodiment may be used as an ultra-compact general purpose proximity sensor 227.
  • the sensor 227 is designed to focus light on a surface, collect and focus the reflected light, and detect the reflected light.
  • the sensor 227 consists of two separable sections: the laser housing 201 and the detector housing 218.
  • the laser housing 201 has a mounting hole 202 in which the collimating optics 203, laser holder 204, laser diode 205, and laser holder clamp 206 are inserted.
  • a PCB 214 mounts directly to the back of the laser housing 201 and contains a socket 217 from which the pins of the laser diode 205 protrude.
  • the detector housing 218 has a mounting hole 219 in which the lens holder 207, focusing lens 208, lens holder clamp 209, photodetector IC 210, photodetector IC holder 211, and several screws 212, 213, 215, 220, 221, 222, 223 are inserted.
  • a battery compartment (not shown) may be positioned anterior to the housings 201 and 218 to power the system.
  • Figure 3 is a schematic diagram illustrating two different configurations of light source optics using a laser source implemented in accordance with embodiments of the invention. In the first configuration 339, the laser 305 emits a beam 327. A circular lens 340 collects laser beam 327 and creates an expanded beam 341.
  • a cylindrical lens 342 collects the expanded beam 341 and creates a collimated beam 328.
  • the laser beam 327 from the laser 305 is collected by a holographic light shaping diffuser 344, which produces a collimated beam 328.
  • Figure 4 is a diagram illustrating three different detector types, implemented in accordance with embodiments of the invention.
  • the first type is a non-position-sensitive detector 445, which has a single pixel 446 as the active region.
  • PSD position-sensitive detector
  • the second detector type shown is a single-pixel PSD 447. Though it has only a single pixel 448, its active area is position sensitive in one dimension.
  • This single-pixel PSD 447 generates a photocurrent from the received light spot, from which its position can be calculated relative to the total active area.
  • the third detector type shown is a single-row, multi-pixel PSD 449, which is also capable of detecting in one dimension.
  • the active area 450 is implemented as a single row of multiple pixels. With detector 449, position may be determined according to which pixels of the array are illuminated.
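  • As a sketch of how a position readout could be derived from such a single-row array, the intensity-weighted centroid of the pixel values gives the spot position in pixel units; this simple Python function is an illustrative assumption, not the detector's actual readout circuit:

        def spot_position(pixel_values):
            # intensity-weighted centroid of a single-row pixel array (pixel-index units)
            total = sum(pixel_values)
            if total == 0:
                return None  # no light detected
            return sum(i * v for i, v in enumerate(pixel_values)) / total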
  • Figure 5 is a schematic diagram illustrating two different configurations of the detector optics implemented in accordance with embodiments of the invention.
  • the reflected beam 530 enters the focusing lens 508 from an angle.
  • the detector 510 is shifted perpendicularly from the optical axis 552 of the focusing lens 508.
  • In the second configuration 553, only the reflected beam 530 enters the microchannel structure 555, while stray light 554 is blocked.
  • Figure 6 illustrates the operation of a splitting mechanism according to an embodiment of the invention.
  • an explosive charge 605 ejects the laser housing 602 and the detector housing 603 from the cartridge 601. In some embodiments, this also assists in slowing the projectile.
  • springs 604 separate the laser housing 602 and the detector housing 603, thereby clearing the projectile's trajectory.
  • an explosive charge may be used to separate housings 602 and 603.
  • Figure 7 illustrates an embodiment of the invention implemented in conjunction with medium caliber projectiles with airburst capabilities.
  • the illustrated embodiment comprises a compact proximity sensor attached to an ogive 704 of a medium caliber projectile.
  • the laser diode 701 emits a modulated laser beam oriented along the longitudinal axis of the projectile, which is collimated by a collimating lens 702.
  • Photodetectors 708 are arranged in an axial symmetrical pattern around the laser diode 701.
  • The optical arrangement of a focusing lens 709 and a photodetector 708 produces an output electrical signal 712 from a photodetector only if a reflecting target 705 or 713 is located in front of the projectile at a distance less than a predefined standoff range. A target 714 located at a distance longer than the standoff range does not produce an output electrical signal 712.
  • An array of axial symmetrical detectors makes target detection more reliable and enhances detector sensitivity.
  • Output analog electrical signals from each photodetector 708 are gated in accordance with the laser modulation frequency and then, instead of immediate thresholding, they are transmitted to electronic circuitry 710 for summation.
  • Summation of signals increases the signal to noise ratio. After summation, the integrated signal is thresholded and delivered to a safe & arm device 711 of the projectile, initiating its airburst detonation.
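  • The gate-sum-threshold flow described above can be sketched in a few lines of Python; the sample layout, gate mask, and threshold below are illustrative assumptions rather than the patent's electronics:

        def airburst_trigger(detector_samples, gate_mask, threshold):
            # detector_samples: one list of samples per photodetector 708
            # gate_mask: 1 during laser-on intervals, 0 otherwise (synchronous gating)
            total = 0.0
            for samples in detector_samples:
                gated = [s * g for s, g in zip(samples, gate_mask)]
                total += sum(gated)          # sum across detectors before thresholding
            return total >= threshold        # True releases the safe & arm device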
  • FIG. 8 illustrates a schematic diagram of electronic circuitry implemented in accordance with an embodiment of the invention.
  • an accelerometer 816 initiates operation of a signal generator inside a microcontroller 817, which produces identical driving signals 818 to start and drive a laser driver 820 and the gating electronics 821 of a photodetector.
  • An optical receiver 821 receives the light signal reflected from a target surface 805 and generates an output analog electrical signal, which is gated 822 and detected synchronously with the laser diode 801 operation. Gated signals are conditioned 823 and summed in a microcontroller 817.
  • the output threshold signal 824 releases the safe & arm device of the projectile, which initiates a projectile explosive detonation.
  • a power conditioning unit 815 supplies electrical power to the laser driver 820, the microcontroller 817, and the optical receiver 821.
  • FIG. 9 illustrates a further embodiment of the invention.
  • the optical impact system 902, 903, 904 and 905 in the illustrated embodiment is attached to a missile projectile 901.
  • the air-to-ground guided missile approaches a target 908, 909 at a variable angle.
  • the missile trajectory is stable (not spinning).
  • the optical impact system has a down looking configuration enabling it to identify the appearance of a target at a predefined distance and trigger a missile warhead detonation in an optimal proximity to the target.
  • a laser transmitter 903 of an optical impact system transmits modulated light 906, 910 toward a potential target 908, 909.
  • depending on the distance to the target, the light reflected from the target can either impact 907 the photodetector 904 or miss 911 the photodetector.
  • Control electronics 905 for driving and modulating the laser light and for synchronous detection of the reflected light are disposed inside the optical impact system housing 902.
  • FIG. 10 illustrates an optical impact system with anti countermeasure functionality implemented in accordance with an embodiment of the invention.
  • Optical impact system anti-countermeasure functionality can be implemented by a plurality of laser sources 1001, 1002 operating at different wavelengths.
  • the laser sources are controlled by an electronic driver 1003 which provides amplitude modulation of each laser source and controls synchronous operation of a photodetector 1005.
  • the plurality of laser beams at a plurality of wavelengths is combined into a single optical path 1013 using a time domain multiplexer and a beam combiner 1004.
  • the light reflected from a target 1016 located at a predefined distance contains all transmitted wavelengths 1014.
  • A receiving tract comprises a photodetector 1005, comparator 1006, demultiplexer 1008, and signal analysis electronics 1009 and 1010 for each of the plurality of input signals.
  • The electronic AND logic circuit 1011 will generate an output trigger signal 1012 only if a valid signal is present in each of the wavelength channels.
  • A laser countermeasure 1015 will, with high probability, operate at a single wavelength and will deliver a signal to the AND logic in only one channel, so the output trigger signal will not be generated.
  • Figure 11 illustrates the geometry of an edge emitting laser.
  • the light from the laser source is projected onto a target and imaged at a photodetector.
  • SCI Source Contour Imaging
  • a laser source 1101 has a thickness Δu 1102, which will be used in calculations herein.
  • the source strip parameters are controlled for optical triangulation (OT) which is applied for SCI sensing.
  • The OT principle is based on finding the location of the center of gravity of the source strip by a two-lens system.
  • both lenses are applied for imaging in one dimension (1D); thus, both are cylindrical, with lens curvature in the same plane, which is also the plane perpendicular to the source's strip.
  • Figure 12 illustrates an optical triangulation geometry. Knowing one side (FG) 1202 and its two adjacent angles (1203, 1201) of the triangle FEG 1205, as in Figure 12, we can find all remaining elements of the triangle, such as sides a 1207 and b 1206, and its height EH 1208.
  • Point G 1204 is known (it is the center of the laser source), and the angle 1201 is known (it is the source's beam direction).
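  • For a concrete illustration of this triangle solution, the height EH follows from the baseline FG and its two adjacent angles by elementary trigonometry, EH = FG / (cot(angle_F) + cot(angle_G)); the Python sketch and numeric values below are illustrative assumptions, not the patent's ranging algorithm:

        import math

        def triangulation_height(baseline, angle_f, angle_g):
            # height EH of triangle FEG from side FG (baseline) and its two
            # adjacent angles, in radians: EH = FG / (cot(F) + cot(G))
            return baseline / (1.0 / math.tan(angle_f) + 1.0 / math.tan(angle_g))

        # example: 3 cm baseline, adjacent angles of 89 and 88 degrees
        print(triangulation_height(0.03, math.radians(89), math.radians(88)))  # ~0.57 m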
  • SCI Source Contour Image
  • Figure 13 illustrates use of source contour imaging (SCI) to find the center of gravity of a laser source's strip transversal dimension, implemented in accordance with an embodiment of the invention.
  • SCI source contour imaging
  • a laser source disposed in a sensor body projects a laser beam 1310 to a target 1311.
  • the target 131 1 is assumed to be a partially Lambertian surface, for example, a 10% Lambertian surface.
  • a reflected beam 1312 is reflected from the target 1311 and detected at the detector 1312.
  • the source strip 1301, with center of gravity G 1302 and size Δu 1303, is collimated by lens 1 (L1) 1304, with focal length f1 1305 and size D, while imaging lens (L2) 1306 has dimensions f2 1307 and D2, respectively.
  • these parameters may vary.
  • the 2nd lens may be larger to accommodate a larger linear pixel area.
  • the size of the source beam at distance ℓ is, according to Figure 13, set by the residual divergence of the collimated beam, whose half-angle is approximately Δu/2f for a source of size Δu collimated by a lens of focal length f.
  • a typical, easy-to-fabricate (low cost) lens usually has f# > 2.
  • f 2 cm
  • Δu 50 μm
  • Eq. (3) can then be rewritten as Eq. (5), where the 2nd term does not depend on the source's size. This term determines the size of the source's image spot on the target, and accordingly contributes to the power output required of the laser. In order to reduce this term, some embodiments use reduced lens sizes.
  • the distance to the target 1307, ℓ, is predetermined according to the concept of operations (CONOPS), and the f# parameter defines how easy the lens is to produce and will also typically be fixed. Accordingly, the f parameter frequently has the most latitude for modification. For example, reducing the focal length by 2 times reduces the 2nd factor by 4 times, to 2.5 mm, vs. the 2.5 cm value of the 1st term.
  • the size of the source contour image (SCI), Δw 1308, is given by Eq. (6), which includes a correction factor that, in good approximation, assuming the angle ACB 1313 is close to 90°, equals 1/cos(α + β) (Eq. 7).
  • Eq. (6) is based on a number of approximations which are well satisfied in the case of low- resolution imaging such as the SCI.
  • AB 1314 is a part of the Lambertian surface of the target 1311, which means that each point of the AB area reflects spherical waves (not shown) as a response to the collimated incident beam 1310 produced by source 1301 with center of gravity G 1302 and strip size Δu 1303.
  • Figure 14 illustrates an imaging lens geometry.
  • area CB 1313 indeed images (approximately) into an area of about Δw's size 1308, as can be seen from the simple imaging lens 1403 geometry in Figure 14, where the x parameter 1401 is the distance of the object point (P) 1402 plane from the lens, while y 1404 is the distance of its image (Q) 1405 plane from lens 1403.
  • the image sharpness is determined according to the de-focusing distance, d 1406, and de-focusing spot, g 1407, with respect to focal plane.
  • the lens imaging equation is 1/x + 1/y = 1/f.
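  • As a small worked sketch of this imaging equation and of the de-focusing spot it implies, the Python below solves for the image distance and estimates the geometric blur for a detector plane offset d from focus; the blur formula g ≈ D·d/y and all numeric values are illustrative assumptions:

        def image_distance(x, f):
            # thin-lens equation 1/x + 1/y = 1/f solved for the image distance y
            return 1.0 / (1.0 / f - 1.0 / x)

        def defocus_spot(aperture_d, y, d):
            # rough geometric blur for a plane displaced by d from the image plane
            # at distance y behind a lens of aperture aperture_d (similar triangles)
            return aperture_d * abs(d) / y

        y = image_distance(x=10.0, f=0.02)        # 10 m object, 2 cm focal length
        print(y, defocus_spot(0.01, y, 0.0005))   # image distance and blur for 0.5 mm offset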
  • Figure 15 illustrates a method of detecting target size implemented in accordance with an embodiment of the invention.
  • Figure 15 uses the same basic geometry and symbols as Figure 13, for the sake of clarity.
  • Points G 1501 and F 1502 are centers of lenses LI 1507 and L2 1508, respectively, and vector v 1503 represents the velocity of missile 1509 in the vicinity of the target 1510.
  • vector v 1503 represents the velocity of missile 1509 in the vicinity of the target 1510.
  • missile 1509 traverses distance vΔt.
  • The angles α and β, as well as the remaining marked angles, are equivalent to those in Figure 13.
  • Distance ℓ 1505 is within the predetermined distance range for triggering the missile 1509 to explode.
  • distance ℓ 1505 may be an optimal predetermined target distance
  • the predetermined distance range may be a range around distance ℓ 1505 where target sensing is possible.
  • the target 1510 becomes initially detectable. This allows detection of the target 1510 through a Δs target area 1506, during time Δt 1504.
  • the detection system can determine that the detected target has at least one dimension greater than or equal to 1.8 m size.
  • This provides a counter-countermeasure (CCM) against obstacles smaller than 1.5 m.
  • CCM counter-countermeasure
  • the distance Δs 1506 may be increased by positioning the major axis in the plane of Figure 15.
  • the photodetector comprises a quadratic pixel array.
  • control logic is provided in the detection system to automatically select the (virtual) linear pixel array with minimum size.
  • a plurality of photodetectors is positioned radially around the detector system, for example as described in Figure 7. In these embodiments, control logic may be configured to select the sensor located closest to the plane of Figure 15 for target detection.
  • Figure 16 illustrates an embodiment of the invention utilizing vignetting for determining if a target is within a predetermined distance range.
  • optical proximity sensor 1600 emits a light beam 1606 from a light source 1601.
  • the sensor 1600 is coupled to a projectile that is moving towards a target. In the sensor's frame of reference, this results in the target moving towards the sensor 1600 with velocity v 1613.
  • the target moves from a first position 1612, to a second position 1611, to a third position 1610.
  • the sensor 1600 includes a detector 1604.
  • the detector 1604 comprises a photodetector 1603 positioned behind an aperture 1614.
  • lenses are foregone, and target imaging proceeds with vignetting or shadowing, alone.
  • the target is at the third position 1610 at distance h 3 from the sensor 1600
  • the reflected light beam 1607 strikes a wall 1602 of the detector 1604 rather than the photodetector 1603.
  • the entire reflected beam 1609 from the first target position 1612 impinges the photodetector 1603.
  • the beam will impinge the photodetector 1603, until the beam no longer impinges the photodetector 1603 (for example, at position 1610).
  • the beam will partially impinge on the photodetector 1603. The beam will then traverse the detector until it fully strikes the photodetector 1603.
  • the signal from the photodetector will first rise, then plateau, then begin to fall.
  • the specific detonation distance within this range is chosen when the signal begins to fall, or has fallen to some predetermined level (for example, 50% of maximum). Accordingly, the time in which the signal increases and plateaus may be used for target verification, while still supporting a relatively precise targeting distance for detonation.
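  • The rise-plateau-fall profile described above can be reduced to a simple detonation rule; in the Python sketch below, the trigger fires when the signal, having risen, falls back to a set fraction of its running maximum. The function name, sample format, and the 50% default are illustrative assumptions:

        def detonation_index(samples, fraction=0.5):
            # index at which the photodetector signal, after rising, has fallen
            # back to `fraction` of its maximum so far; None if it never does
            peak = 0.0
            rising_seen = False
            for i, s in enumerate(samples):
                if s > peak:
                    peak = s
                    rising_seen = True
                elif rising_seen and peak > 0 and s <= fraction * peak:
                    return i
            return None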
  • Figure 17 illustrates a lensless light source for use in an optical proximity sensor implemented in accordance with an embodiment of the invention.
  • the light source 1700 can also be vignetted.
  • the light source may be imaged directly onto the target area.
  • a Lambertian target surface backscatters the source beam into detector area where a second imaging system is provided, resulting in dual imaging, or cascade imaging.
  • Figure 18 illustrates variables of a lens system for quantitative analysis purposes.
  • the viewing beam imaging can be provided with single-lens or dual-lens system.
  • 40 μm, which is a very small value for precise adjustment.
  • the positioning requirements can be made less demanding by utilizing a dual-lens imaging system .
  • Figure 18 illustrates a dual lens geometry.
  • Two convex lenses, 1801 and 1802, are provided for source (viewing) beam imaging, with focal lengths f1 and f2, including the imaging equation for the 1st lens (x1, y1, f1) and the imaging equation for the 2nd lens (x2, y2, f2).
  • a point source, O is included, for simplicity, with its image, O'.
  • the source is placed in front of the 1st focus, F1, at a small offset distance from the focal plane.
  • the lens curvature radius, R is larger than the half of the lens size, D; R > D/2.
  • Interference filters, especially reflective ones, have higher filtering power (i.e., high rejection of unwanted spectrum with high acceptance of the source spectrum) at the expense of angular wavelength dispersion.
  • absorption filters have lower filtering power while avoiding angular wavelength dispersion.
  • Dispersive devices such as gratings are based on grating wavelength dispersion. Among them, volume (Bragg) holographic gratings have the advantage of selecting only one diffraction first order (instead of two, as in the case of thin gratings); thus, increasing filtering power by at least a factor of two.
  • Reflection interference filters have higher filtering power than transmission ones due to the fact that it is easier to reflect a narrower spectrum than a broader one.
  • a Lippmann reflection filter comprises a plurality of interference layers that are parallel to the surface.
  • Such a filter can be made either holographically (in which case, the refractive index modulation is sinusoidal), or by thin-film coating (in which case, the refractive index modulation is quadratic).
  • Figure 19 illustrates two detector geometries for use with reflection filters implemented in accordance with embodiments of the invention.
  • an aperture is formed in a detector housing 1903.
  • imaging is based on vignetting entirely.
  • lens or mirror based imaging systems may be combined with the aperture.
  • the detector is configured to receive a beam 1910 reflected from a target.
  • a reflective filter 1905 is configured to reflect only wavelengths near the wavelength or wavelengths of the laser light source or sources used in the proximity detector. Accordingly filter 1905 filters out likely spurious light sources, reducing the probability of a false alarm.
  • Filter 1905 is configured to reflect light at an angle to detector 1907. For example, such non-Lippmann slanted filters may be produced using holographic techniques.
  • a Lippmann filter 1906 is disposed at an angle with respect to the aperture, allowing beam 1909 to be filtered and reflected to detector 1908 as illustrated.
  • optical signals can be significantly distorted, attenuated, scattered, or disrupted by harsh environmental conditions such as: rain, snow, fog, smog, high temperature gradient, humidity, water droplets, aerosol droplets, etc.
  • optical window transparency can be significantly reduced due to dirt, water particles, fatty acids, etc. In some embodiments, the use of a hygroscopic window material protects against the latter factor.
  • high conversion efficiency can be obtained using VCSEL-arrays.
  • the VCSEL arrays may be arranged in a spatial signature pattern, further increasing resistance to false alarms.
  • Figure 20 illustrates a VCSEL array 2000 arranged in a "T"-shaped distribution. Arranging the laser diodes into the desired spatial distribution avoids signature masks, which would block some illumination and thus reduce optical power, or the effective conversion efficiency (Eq. 49).
  • beam focusing lens source geometries such as projection imaging and detection imaging, as discussed above, provide further protection from beam attenuation.
  • the system magnification M defined by Eq. (41) is reduced by increasing the f1 value.
  • horizontal dimension is increased by using mirrors or prisms to provide a periscopic system.
  • High temperature gradient can cause strong material expansion; thus, reducing mechanical stability of optical system.
  • the effects of temperature gradients are reduced.
  • the temperature gradient ΔT between T1, the temperature at high altitudes (e.g., -10°C), and T2, the temperature of air heated by friction against the missile body (e.g., +80°C), creates an expansion Δl of the material according to the linear thermal expansion relation Δl = α·l·ΔT, where α is the material's expansion coefficient and l its length.
  • T1 temperature at high altitudes (e.g., -10°C)
  • T2 temperature of air due to air friction against the missile body (e.g., +80°C)
  • index-matching architectures are implemented to avoid such large expansion values at mechanical interfaces.
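  • A quick numeric check of this expansion, using an assumed aluminum mount as an example (the coefficient and length here are illustrative, not values from the patent):

        alpha = 23e-6          # 1/K, linear expansion coefficient of aluminum (assumed)
        length = 0.05          # m, optical mount length (assumed)
        delta_T = 80 - (-10)   # K, temperature gradient from the example above

        delta_l = alpha * length * delta_T   # delta_l = alpha * l * delta_T
        print(delta_l)                       # ~1.0e-4 m, i.e. about 0.1 mm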
  • attempts at active countermeasures may be utilized by adversaries.
  • anti-countermeasure techniques are employed to reduce false alarms caused by countermeasures. Examples include the use of spatial and temporal signatures. One such spatial signature has been illustrated in Figure 20, where two VCSEL linear arrays 2001 and 2002, forming the shape of the letter "T", have been used. In other embodiments, other spatial signatures or distributions of light sources may be used to produce a spatial signature for the optical proximity fuze.
  • Such spatial signatures, in order to be recognized, have to be imaged at the detector space by using a 2D photodetector array.
  • masks may be used to provide a spatial signature.
  • Figure 21 illustrates a mask applied to an edge emitting laser source 2100. Masked areas 2101 are blocked from emitting light, while unmasked areas 2102 are allowed to emit light.
  • pulse length coding may be used to provide temporal signatures for anti-countermeasures.
  • Figure 22 illustrates such pulse length modulation.
  • matching a pre-determined pulse length code may be used for anti-countermeasures.
  • the detection system may be configured to verify that the sequence, indexed by k, of pulse lengths t(2k+1) − t(2k) matches a predetermined sequence.
  • the detection system may be configured to verify that the sequence of start and end times for the pulses matches a predetermined sequence. For example, in Figure 22, the temporal locations of the zero points t1 2201, t2 2202, t3 2203, t4 2204, and t5 2205 are presented.
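  • A minimal Python sketch of such a pulse-length code check; the pairing convention, tolerance, and names are illustrative assumptions rather than the patent's detector logic:

        def matches_code(zero_times, expected_lengths, tol=1e-6):
            # zero_times: measured edge times t1, t2, ... taken as rising/falling pairs
            # expected_lengths: the pre-agreed pulse-length code
            measured = [t2 - t1 for t1, t2 in zip(zero_times[0::2], zero_times[1::2])]
            return (len(measured) == len(expected_lengths) and
                    all(abs(m - e) <= tol for m, e in zip(measured, expected_lengths)))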
  • methods for edge detection are applied to assist in the use of spatial or temporal signatures.
  • a) de-convolution or b) novelty filtering is applied to received optical signals.
  • De-convolution can be applied to any spatial or temporal imaging.
  • Spatial imaging is usually 2D, while temporal imaging is usually 1D.
  • the Fourier transform of the received intensity I1(x) is I1(fx) = ∫ I1(x) exp(−j2π fx x) dx (Eq. 52), where fx is the spatial frequency in lines per mm and I1(fx) is generally complex. Since Eq. (51) is the convolution of h(x) and I0(x), its Fourier transform is I1(fx) = H(fx)·I0(fx) (Eq. 53); thus I0(fx) = I1(fx)/H(fx) (Eq. 54), and I0(x) can be found by the de-convolution operation, i.e., by applying Eq. (54) and then the inverse Fourier transform of I0(fx).
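  • The spectral-division de-convolution just described maps directly onto an FFT implementation; this NumPy sketch is illustrative only, and the small regularization term eps (to avoid dividing by near-zero frequencies) is an added assumption:

        import numpy as np

        def deconvolve(i1, h, eps=1e-9):
            # recover I0(x) from I1 = h * I0 by dividing spectra (Eqs. 52-54)
            I1 = np.fft.fft(i1)
            H = np.fft.fft(h, n=len(i1))
            I0 = I1 / (H + eps)
            return np.real(np.fft.ifft(I0))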
  • Novelty filtering is an electronic operation applied for spatial imaging purposes. It can be applied to such spatial signatures as the VCSEL array pattern because each single VCSEL area has four spatial edges. Therefore, if we shift, in the electronic domain, the VCSEL array image by a fraction of a single VCSEL area and subtract the un-shifted and shifted images in the spatial domain, we obtain novelty signals at the edges, as shown in 1D geometry in Figure 23.
  • novelty filtering comprises determining a first spatial signature 2300 and shifting the spatial signature in the spatial domain to determine a second spatial signature 2301. Subtracting the two images 2300 and 2301 results in a set 2302 of novelty features 2303 that may be used for edge detection.
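  • In code, the shift-and-subtract operation is essentially a one-liner; the 1D NumPy sketch below is an illustrative assumption of how such a novelty filter could be realized, not the patent's electronics:

        import numpy as np

        def novelty_filter(image_row, shift=1):
            # 1D novelty filtering: subtract a shifted copy so only edges remain
            shifted = np.roll(image_row, shift)
            return image_row - shifted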
  • Figure 24 illustrates multi-wavelength light source and detection implemented in accordance with an embodiment of the invention.
  • Figure 24A illustrates the light source in the source plane
  • Figure 24B illustrates the detector plane.
  • the axes are as labeled with respect to the plane of Figure 13 being the (X, Y)-plane.
  • two light sources 2400 and 2401, such as VCSEL arrays, are disposed in the (X, Z)-plane and emit two wavelengths, λ1 and λ2, respectively.
  • use of spherical lenses (not cylindrical lenses) in order to image the 2D source plane into the 2D detector plane.
  • D1 and D2, 2402 and 2403, are covered by narrow wavelength filters, as described above, corresponding to source wavelengths λ1 and λ2, respectively.
  • FAR False Alarm Rate
  • FAR ≈ exp(−IT²/2In²) / (2√3·τ) (Eq. 56)
  • In noise signal (related to optical intensity)
  • IT threshold intensity
  • τ pulse temporal length
  • Eq. (56) can be rewritten in terms of the false-alarm time, tFAR (Eq. 57).
  • the second threshold probability is the probability of detection, defined as the probability that the summary signal Is + In is larger than the threshold signal IT; i.e.,
  • N(x) normal probability integral
  • erf(x) error function
  • N(x) = erf(x/√2)
  • the signal intensity, I s is defined by the application and specific components used, as illustrated above, while noise intensity, I n , is defined by detector's (electronic) noise and by optical noise.
  • D* 10^12 cm·Hz^1/2·W^-1
  • B 5 MHz
  • A 5 mm × 5 mm = 0.25 cm²
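  • Assuming the standard detectivity relation NEP = √(A·B)/D* (this relation and the tidy numbers are assumptions for illustration, chosen to be consistent with the values quoted above), the detector's noise-equivalent power works out to roughly a nanowatt:

        import math

        D_star = 1e12              # detectivity, cm*Hz^0.5/W
        B = 5e6                    # electrical bandwidth, Hz
        A = 0.25                   # detector area, cm^2

        nep = math.sqrt(A * B) / D_star   # noise-equivalent power, W
        print(nep)                        # ~1.1e-9 W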
  • threshold value IT.
  • P D decreases, i.e., the system performance declines.
  • the tFAR value also decreases; i.e., the system performance increases. Therefore, there is a trade-off between those two tendencies, while the threshold value IT is usually located between the In and Is values: In < IT < Is.
  • Figure 25 illustrates a method of pulse detection using thresholding implemented in accordance with an embodiment of the invention.
  • Figure 25 A illustrates a series of pulses transmitted by a light source in an optical proximity fuze.
  • Figure 25B illustrates the pulse 2502 received after transmission of pulse 2501.
  • noise I n results in distortion of the signal.
  • a threshold IT 2503 may be established for the detector to register a detected pulse. Accordingly, pulse start time 2504 and end time 2505 may be detected as the times when the received wave 2502 crosses the threshold 2503.
  • for a high IT value 2503, the z parameter will be low and thus the probability of detection will also be low, while for a low IT value 2503, the x parameter will be low and thus the false alarm rate will be high.
  • a low pass filter is used in the detection system to smooth out the received pulse.
  • Figure 26 illustrates this process.
  • An initially received pulse 2600 has many of its high frequency components removed after passage through a low pass filter, resulting in smoothed wave pulse 2601. This low pass operation results in less ambiguity in the regions 2602 where the pulses cross the threshold value.
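  • A compact sketch of this smooth-then-threshold step, using a single-pole IIR low-pass filter; the filter form, the alpha parameter, and the crossing logic are illustrative assumptions, not the patent's circuit:

        def smooth_and_threshold(samples, alpha, threshold):
            # alpha in (0, 1]; smaller alpha means stronger smoothing
            crossings, y, prev_above = [], 0.0, False
            for i, s in enumerate(samples):
                y += alpha * (s - y)          # IIR low-pass filter
                above = y >= threshold
                if above != prev_above:       # rising or falling threshold crossing
                    crossings.append(i)
                prev_above = above
            return crossings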
  • the x value is increased with increasing SNR value, due to Eq. (65), in order to reduce the tFAR value, as in Eq. (57). This is because, with the increase in SNR provided by the smoothing technique (Eq. 65), we can increase the x value while keeping the z value constant, according to Eq. (66), which minimizes the tFAR value due to Eq. (57).
  • the threshold value IT is defined by this new, improved trade-off.
  • a procedure for finding the threshold value (IT)0 is as follows.
  • STEP 1: Provide an experimental realization of Figure 25B in order to determine the experimental value of the optical noise intensity, In'.
  • STEP 2 Determine, by calibration, the conservative signal value, I s , for a given phase of optical impact duration, including: rising phase, maximum phase, and declining phase.
  • I s the conservative signal value
  • the precision of the pulse length coding can be very high because it is based on a priori information which is known for the detector circuit, for example, using synchronized detection. However, even in the general case (67), the precision can be still high, since a priori information about variable pulse length can be also known for detector circuit.
  • multi-wavelength variable pulse coding may be implemented.
  • Figure 27 illustrates such an embodiment.
  • light sources of a plurality of light sources are configured to emit a first wavelength of light 2701 or a second wavelength of light 2702.
  • the light sources operate in a complementary, or non-overlapping, manner, such that different wavelengths 2704 and 2705 are always transmitted at different times.
  • the particular wavelengths and the pulse lengths allow for temporal and wavelength signatures that may be used for false alarm mitigation.
  • the light sources operate in an overlapping manner, resulting in times 2706 when both wavelengths are transmitted. As described above, the use of different filters allows both wavelengths to be detected, and the overlapping times provide another signature for false alarm mitigation.
  • an energy harvesting subsystem 2800 may be utilized to increase the energy available for the optical proximity detection system.
  • Current drawn from the projectile engine 2803 during the flight time Δt0 is stored in the subsystem 2800 and used during detection.
  • G 20
  • W 50 m
  • G 40.
  • the signal level Is will increase proportionally, and thus also the SNR value.
  • FIG. 28 illustrates an energy harvesting subsystem 2800 implemented in accordance with this embodiment.
  • a rechargeable battery 2807 may be combined with a supercapacitor 2805, or either component may be used alone, for temporary electrical energy storage.
  • the supercapacitor 2805 is used in combination with the battery 2807. This allows the relative strengths of each system to be utilized.
  • a harvesting energy management module (HEMM) 2806 controls the distribution of the electrical power Pel from an engine 2803.
  • the power is stored in the battery 2807 or supercapacitor 2805 and then, transmitted into the sensor.
  • the electrical energy is stored and accumulated during the flight time Δt0 (or during part of this time), and is transmitted into the sensor during the window time, W.
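  • If the stored energy is released only during the window time W, the available power is boosted roughly by the ratio of accumulation time to window time; the gain relation G = Δt0/W and the numbers below are illustrative assumptions chosen to match the G values quoted earlier:

        flight_time = 2.0   # s, assumed accumulation time (part or all of flight)
        window = 0.05       # s, assumed detection window W

        G = flight_time / window   # approximate power gain from store-and-release
        print(G)                   # 40.0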
  • the HEMM 2806 may draw power from an Engine Electrical Energy (E3) module installed to serve additional sub-systems with power. In a particular embodiment, the battery's 2807 form factor is configured such that its power density is maximized; i.e., the charge electrode proximity (CEP) region should be enlarged as much as possible. This is because the energy can be quickly stored in and retrieved from the CEP region only.
  • E3 Engine Electrical Energy
  • OIE optical impact effect
  • the upper graph 2901 illustrates a trajectory of a projectile.
  • the lower graph 2902 illustrates the mean signal intensity received at a photodetector within the optical proximity fuze.
  • the time axis of both graphs is aligned for illustrative purposes.
  • the fuze is configured to activate the projectile at a predetermined distance yo 2907.
  • the activation distance 2907 is aligned with the end of the time window 2906 in which the target can be detected.
  • the predetermined activation distance can be situated at other points within the detection range.
  • the range in which the target can be detected 2909 is determined according to the position of the photodetectors relative to the receiving aperture of the optical proximity fuze.
  • the optical proximity fuze begins transmitting light towards the target.
  • Light begins being detected by the photodetector at the start of window 2906.
  • the mean intensity 2910 increases to a maximum value 2903 and then declines 2904 to a minimum value.
  • module does not imply that the components or functionality described or claimed as part of the module are all configured in a common package. Indeed, any or all of the various components of a module, whether control logic or other components, can be combined in a single package or separately maintained and can further be distributed in multiple groupings or packages or across multiple locations. Additionally, the various embodiments set forth herein are described in terms of exemplary block diagrams, flow charts and other illustrations.

Abstract

An optical impact system controls munitions termination by sensing proximity to a target and preventing countermeasures from causing false munitions termination. Embodiments can be implemented in a variety of munitions, such as small and mid caliber, applicable in non-lethal weapons, in weapons of high lethality with airburst capability, and in guided air-to-ground and cruise missiles. Embodiments can improve the accuracy, reliability, and lethality of munitions, depending on their designation, without modification of the weapon itself, and make the weapon resistant to optical countermeasures.

Description

OPTICAL IMPACT CONTROL SYSTEM
Cross-Reference to Related Applications
This application claims the benefit of U.S. Provisional Application No. 61/265,270 filed November 30, 2009 and U.S. Utility Application No. 12/916,147 filed October 29, 2010, which are hereby incorporated herein by reference in their entirety.
Technical Field
The present invention relates generally to optical detection devices, and more
particularly, some embodiments relate to optical impact systems with optical countermeasure resistance.
Description of the Related Art
The law-enforcement community and U.S. military personnel involved in peacekeeping operations need a lightweight weapon that can be used in circumstances that do not require lethal force. A number of devices have been developed for these purposes, including a shotgun-size or larger caliber dedicated launcher to project a solid, soft projectile or various types of rubber bullets, to inject a tranquilizer, or stun the target. Unfortunately, currently all these weapon systems can only be used at relatively short distances (approximately 30 ft.). Such short distances are not sufficient for the proper protection of law-enforcement agents from opposition force.
The limitation in the performance range of non-lethal weapon systems is generally associated with the kinetic energy of the bullet or projectile at the impact. To deliver the projectile to the remote target with the reasonable accuracy, the initial projectile velocity must be high - otherwise the projectile trajectory will be influenced by wind, atmospheric turbulence, or the target may move during projectile travel time. The large initial velocity determines the kinetic energy of a bullet at the target impact. This energy is usually sufficient to penetrate a human tissue or to cause large blunt trauma, thus making the weapon system lethal.
Several techniques have been developed to reduce the kinetic energy of projectiles before the impact. These techniques include an airbag inflatable before the impact, a miniature parachute opened before the impact, fins on the bullet opened before the impact to reduce the bullet speed, a powder or small particle ballast that can be expelled before the impact to reduce the projectile mass and thus to reduce its kinetic energy before the impact and so on.
Regardless of the technique used for the reduction of the projectile kinetic energy before the impact, it always contains some trigger device that activates the mechanism that reduces the projectile kinetic energy. In the simplest form it can be a timer that activates this mechanism at a predetermined moment after a shot. More complex devices involve various types of range finders that measure the distance to a target. Such a range finder can be installed on the shotgun or launcher and can transmit the information about a target's range to the projectile before a shot. Such a weapon may be lethal to bystanders in front of the target who intercept the projectile trajectory after the real target range has been transmitted to the projectile. Weapon systems that carry a rangefinder or proximity sensor on the projectile are preferable because they are safer and better protected from such occasional events.
There are several types of range finders or proximity sensors used in bombs, projectiles, or missiles. Passive (capacitive or inductive) proximity sensors react to the variation of the electromagnetic field around the projectile when a target appears at a certain distance from the sensor. This distance is very short (several feet, usually), so they leave only a short time for the slowdown mechanism to reduce the projectile's kinetic energy before it hits the target. Active sensors use acoustic, radio frequency, or light emission to detect a target. Acoustic sensors require a relatively large emitting aperture that is not available on small-caliber projectiles. A small emission aperture also causes radio waves to spread into a large angle, so any object located to the side of the projectile trajectory can trigger the slow-down mechanism, leaving the target intact. In contrast, light emission, even from the small aperture available on small-caliber projectiles, may be made with small divergence so that only objects along the projectile trajectory are illuminated. The light reflected from these objects is used in optical range finders or proximity sensors to trigger a slow-down mechanism.
But although the light emitted by an optical sensor can be well collimated, the light reflected from a diffuse target is not collimated, so a larger aperture of the receiving channel in the optical sensor is highly desirable to collect more of the light reflected from a diffuse target and thus to increase the range of target detection and provide more time for the slow-down mechanism to reduce the projectile kinetic energy before the target impact. A new generation of 40 mm low/medium-velocity munitions that could provide higher lethality due to airburst capability is needed. This will provide the soldiers with the capability to engage enemy combatants in varying types of terrain and battlefield conditions including concealed or defilade targets. The new munition, assembled with a smart fuze, has to "know" how far the round is from the impact point. A capability to burst the round at a predefined distance from the target would greatly increase the effectiveness of the round. The Marine Corps, in particular, plans to fire these smart munitions from current legacy systems (the M32 multishot and M203 under-barrel launcher) and the anticipated XM320 single-shot launcher.
Current technologies involve either computing the time of flight and setting the fuse for a specific time, or counting revolutions, with an input to the system to tell it to detonate after a specific number of turns. Both of these technologies allow for significant variability in the actual height of the airburst, potentially limiting effectiveness. Another solution is proximity fuzes, which are widely used in artillery shells, aviation bombs, and missile warheads; their magnetic, electric capacitance, radio, and acoustic sensors trigger the ordnance at a given distance from the target. These types of fuzes are vulnerable to EMI, are bulky and heavy, have poor angular resolution (low target selectivity), and usually require some preset mechanism for activation at a given distance from the target.
Brief Summary of Embodiments of the Invention
According to various embodiments of the invention, an optical impact system is attached to fired munitions. The optical impact system controls munition termination by sensing proximity to a target and by preventing countermeasures from causing false munition termination. Embodiments can be implemented on a variety of munitions, such as small- and mid-caliber rounds applicable to non-lethal weapons, high-lethality weapons with airburst capability, and guided air-to-ground and cruise missiles. Embodiments can improve the accuracy, reliability, and lethality of munitions, depending on their designation, without modification of the weapon itself, and make the weapon resistant to optical countermeasures.
Other features and aspects of the invention will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, which illustrate, by way of example, the features in accordance with embodiments of the invention. The summary is not intended to limit the scope of the invention, which is defined solely by the claims attached hereto.

Brief Description of the Drawings
The present invention, in accordance with one or more various embodiments, is described in detail with reference to the following figures. The drawings are provided for purposes of illustration only and merely depict typical or example embodiments of the invention. These drawings are provided to facilitate the reader's understanding of the invention and shall not be considered limiting of the breadth, scope, or applicability of the invention. It should be noted that for clarity and ease of illustration these drawings are not necessarily made to scale.
Some of the figures included herein illustrate various embodiments of the invention from different viewing angles. Although the accompanying descriptive text may refer to such views as "top," "bottom" or "side" views, such references are merely descriptive and do not imply or require that the invention be implemented or used in a particular spatial orientation unless explicitly stated otherwise.
Figure 1 illustrates a first embodiment of the present invention.
Figure 2 illustrates a particular embodiment of the invention in assembled and exploded views.
Figure 3 is a schematic diagram illustrating two different configurations of light source optics using a laser source implemented in accordance with embodiments of the invention.
Figure 4 is a diagram illustrating three different detector types, implemented in accordance with embodiments of the invention.

Figure 5 is a schematic diagram illustrating two different configurations of the detector optics implemented in accordance with embodiments of the invention.
Figure 6 illustrates the operation of a splitting mechanism according to an embodiment of the invention.
Figure 7 illustrates an embodiment of the invention implemented in conjunction with medium caliber projectiles with airburst capabilities.
Figure 8 illustrates a schematic diagram of electronic circuitry implemented in accordance with an embodiment of the invention.

Figure 9 illustrates a further embodiment of the invention.

Figure 10 illustrates an optical impact system with anti-countermeasure functionality implemented in accordance with an embodiment of the invention.

Figure 11 illustrates the geometry of an edge emitting laser.

Figure 12 illustrates an optical triangulation geometry.
Figure 13 illustrates use of source contour imaging (SCI) to find the center of gravity of a laser source's strip transversal dimension, implemented in accordance with an embodiment of the invention.
Figure 14 illustrates an imaging lens geometry.

Figure 15 illustrates a method of detecting target size implemented in accordance with an embodiment of the invention.
Figure 16 illustrates an embodiment of the invention utilizing vignetting for determining if a target is within a predetermined distance range.
Figure 17 illustrates a lensless light source for use in an optical proximity sensor implemented in accordance with an embodiment of the invention.
Figure 18 illustrates a dual lens geometry.
Figure 19 illustrates two detector geometries for use with reflection filters implemented in accordance with embodiments of the invention.
Figure 20 illustrates a laser diode array having a spatial signature implemented in accordance with an embodiment of the invention.
Figure 21 illustrates a laser diode mask for implementing a spatial signature in accordance with an embodiment of the invention.
Figure 22 illustrates a laser light signal with pulse length modulation implemented in accordance with an embodiment of the invention.

Figure 23 illustrates a novelty filtering operation for edge detection implemented in accordance with an embodiment of the invention.

Figure 24 illustrates multi-wavelength light source and detection implemented in accordance with an embodiment of the invention.

Figure 25 illustrates a method of pulse detection using thresholding implemented in accordance with an embodiment of the invention.
Figure 26 illustrates a method of pulse detection using low pass filtering and thresholding implemented in accordance with an embodiment of the invention.
Figure 27 illustrates a multi-wavelength variable pulse coding operation implemented in accordance with an embodiment of the invention.
Figure 28 illustrates an energy harvesting sub-system implemented in accordance with an embodiment of the invention.
Figure 29 illustrates an optical impact profile during target detection in accordance with an embodiment of the invention.

The figures are not intended to be exhaustive or to limit the invention to the precise form disclosed. It should be understood that the invention can be practiced with modification and alteration, and that the invention be limited only by the claims and the equivalents thereof.
Detailed Description of the Embodiments of the Invention
An embodiment of the present invention is an optical impact system installed on a plurality of projectiles of various calibers, from 12-gauge shotgun rounds through medium caliber grenades to guided missiles with medium or large initial (muzzle) velocity, that can detonate high explosive payloads at an optimal distance from a target in an airburst configuration or can reduce the projectile's kinetic energy before hitting a target located at any (both small and large) range from a launcher or a gun. In some embodiments, the optical impact system comprises a plurality of laser light sources operating at orthogonal optical wavelengths, and signal analysis electronics minimizes the effects of laser countermeasures to reduce the false fire probability. The optical impact system may be used in non-lethal munitions or in munitions with enhanced lethality. The optical impact system may include a projectile body on which it is mounted, a plurality of laser transmitters and photodetectors implementing the principle of optical triangulation, a deceleration mechanism (for non-lethal embodiments) that is activated by the optical impact system, an expelling charge with a fuse also activated by the optical impact system, and a projectile payload.
In a particular embodiment the optical impact system is comprised of two separate parts of approximately equal mass. One of these parts includes a light source comprised of a laser diode and collimating optics that direct the light emitted by the laser diode parallel to the projectile axis. The second part includes receiving optics and a photodetector located in the focal plane of the receiving optics while being displaced a predetermined distance from the optical axis of the receiving optics. Both parts of the optical impact system are connected to an electric circuit that contains a miniature power supply (battery) activated by an inertial switch during launch; a pulse generator to send light pulses with a high repetition rate and to detect the light reflected from a target synchronously with the emitted pulses; and a comparator that activates a deceleration mechanism and a fuse when the amplitude of the reflected light exceeds the established threshold. In further embodiments, a spring or explosive between sensor parts separates the parts after they are discharged from the projectile.
In another embodiment, the optical impact system is disposed in an ogive of an airburst round. The optical impact system comprises a laser diode with collimating optics disposed along the central axis of the projectile and an array of photodetectors arranged in an axially symmetric pattern around the laser diode. When any light reflecting object intersects the projectile trajectory within a certain predetermined distance in front of the projectile, the optical impact system generates a signal to the deceleration mechanism and to the fuse. The fuse ignites the expelling charge that forces both parts of the proximity sensor to be expelled from the projectile. The recoil from the sensor expulsion reduces the momentum of the remaining projectile and reduces its kinetic energy, so a more compact deceleration mechanism can be used to further reduce the projectile kinetic energy to a non-lethal level. The sensor expulsion also clears the path for the projectile payload to hit a target. Without restraint from the projectile body, springs initially located between the two parts of the sensor force their separation such that each of them receives a momentum in the direction perpendicular to the projectile trajectory to avoid striking the target with the sensor parts. In this embodiment, the deceleration mechanism needs a certain time to reduce the kinetic energy of the remaining part of the projectile to a safe level. The time available for this process depends on the distance at which a target can be detected. In some embodiments, an increase in detecting range at a given pulse energy available from a laser diode is achieved by using a special orientation of the laser diode, with its p-n junction being perpendicular to the plane where both the receiver and the emitter are located. In the powerful laser diodes used in the proximity sensors, the light is emitted from a p-n junction that usually has a thickness of approximately 1 μm and a width of several micrometers. After passing the collimating lens, the light beam has an elliptical shape with the long axis in the plane perpendicular to the p-n junction plane. The light reflected from a diffuse target is picked up by a receiving lens, which creates an elliptical image of the illuminated target area in the focal plane. The long axis of this spot is perpendicular to the plane where the light emitter and the photodetector are located. The movement of the projectile towards the target causes displacement of the spot in the focal plane. When this spot reaches the photosensitive area on a photodetector, a photocurrent is generated and compared with a threshold value. The photocurrent reaches the threshold level faster with the spot oriented as described above, so the sensor performance range can be larger and the time available for the deceleration mechanism to reduce the projectile velocity is larger, thus enhancing the safety of non-lethal munitions usage.
In further embodiments, anti-countermeasure functionality of the optical impact system is implemented to reduce the probability of false fire, which can be caused by a laser countermeasure transmitting at the same wavelength as the optical impact system and with the same modulation frequency. The anti-countermeasure embodiment of an optical impact system uses a plurality of light sources transmitting at different wavelengths, and signal analysis electronics generates an output fire trigger signal only if a reflected signal is detected at every wavelength with a modulation frequency identical to that of the transmitted light. There is a low probability that a countermeasure laser source will transmit a decoy irradiation in all of the optical impact system's wavelengths and modulation frequencies.

An embodiment of the invention is now described with reference to the Figures, where like reference numbers indicate identical or functionally similar elements. The components of the present invention, as generally described and illustrated in the Figures, may be implemented in a wide variety of configurations. Thus, the following more detailed description of the embodiments of the system and method of the present invention, as represented in the Figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of presently preferred embodiments of the invention.

Figure 1 illustrates a first embodiment of the present invention. The sensor 126 is designed to focus light on a surface, collect and focus the reflected light, and detect the reflected light. A sensor 126 includes a light source, such as a laser diode 105. In some embodiments, the laser diode 105 may comprise a vertical cavity surface emitting laser (VCSEL) diode, or an edge-emitting laser diode such as a separate-confinement heterostructure (SCH) laser diode. The components of the sensor 126 are located in the main housing 132. Within the main housing 132 are the laser housing 101 and detector housing 118. The laser housing 101 contains the collimating optics 103 and laser diode 105. In some embodiments, the collimating optics 103 may comprise a spherical or cylindrical lens. The detector housing 118 contains the focusing lens 108 and detector 110. In some embodiments, the focusing lens 108 may be a spherical or cylindrical lens. A printed circuit board (PCB) 114, containing the electronics required to properly power the laser diode 105, is located behind the main housing 132. The main housing is insertable into a cartridge housing 133 to attach to the projectile.
In the illustrated embodiment, the sensor 126 also includes an optical projection system configured such that the light from the laser diode 105 is substantially in focus within a predetermined distance range. In the illustrated embodiment, the optical projection system comprises the collimating lens 103, which intercepts the diverging beam (for example, beam 327 of Figure 3) coming from the laser diode 105 and produces a collimated beam (for example, beam 328 of Figure 3) directed to the illumination spot on the target surface (for example, target 339 of Figure 3). A collimated beam provides a more uniform light spot across a distance range compared to a beam focused to a particular focal point. However, in other embodiments, the projection system may include converging lenses, including cylindrical lenses, focused such that the beam is substantially in focus within the predetermined distance range. For example, the image plane may be at a point within the predetermined distance range, such that at the beginning of the predetermined distance range the beam is suitably in focus for detection.
Naturally, different surfaces demonstrate various reflective and absorption properties. In some embodiments, to ensure that enough reflected light from various surfaces reaches the receiving lens 108 and subsequently the detector 110, the operating power of the laser can be increased. This can be achieved while still maintaining low power consumption by modulating the laser diode 105. Furthermore, driving the laser diode 105 in pulsed mode, as opposed to continuous wave (CW) drive, also allows higher power output. However, even with enough reflected light from the surface (for example, target 339 of Figure 3), the detection range of the sensor is inherently limited by the field-of-view of the receiving optics 108 and its ability to collect and focus the reflected light onto the detector 110. Accordingly, in some embodiments, the distance range that prompts activation of the fuze may be tailored according to these parameters. When any object is introduced into the path of the laser beam spot (for example, beam 328 of Figure 3), light is reflected from its surface. An optical imaging system, for example including an aperture and receiving lens 108, collects the reflected light and produces a converging beam (for example, beam 331 of Figure 3) to the detector 110. In some embodiments, only the detection of an object within a predetermined distance is required, and the detector 110 comprises only a single-pixel, non position-sensitive detector (PSD). Furthermore, no specialized processing electronics for calculating actual distance is necessary.
Figure 2 illustrates a particular embodiment of the invention in assembled and exploded views. The illustrated embodiment may be used as an ultra-compact general purpose proximity sensor 227. The sensor 227 is designed to focus light on a surface, collect and focus the reflected light, and detect the reflected light. The sensor 227 consists of two separable sections: the laser housing 201 and the detector housing 218. The laser housing 201 has a mounting hole 202 in which the collimating optics 203, laser holder 204, laser diode 205, and laser holder clamp 206 are inserted. A PCB 214 mounts directly to the back of the laser housing 201 and contains a socket 217 from which the pins of the laser diode 205 protrude. The detector housing 218 has a mounting hole 219 in which the lens holder 207, focusing lens 208, lens holder clamp 209, photodetector IC 210, and photodetector IC holder 211 are inserted, together with several screws 212, 213, 215, 220, 221, 222, 223. A battery compartment (not shown) may be positioned anterior to the housings 201 and 218 to power the system.

Figure 3 is a schematic diagram illustrating two different configurations of light source optics using a laser source implemented in accordance with embodiments of the invention. In the first configuration 339, the laser 305 emits a beam 327. A circular lens 340 collects laser beam 327 and creates an expanded beam 341. A cylindrical lens 342 collects the expanded beam 341 and creates a collimated beam 328. In the second configuration 343, the laser beam 327 from the laser 305 is collected by a holographic light shaping diffuser 344, which produces a collimated beam 328.

Figure 4 is a diagram illustrating three different detector types, implemented in accordance with embodiments of the invention. The first type is a non position-sensitive detector (PSD) 445, which has a single pixel 446 as the active region. The second detector type shown is a single-pixel PSD 447. Though it has only a single pixel 448, its active area is
manufactured in various lengths and is capable of detecting in one dimension, such as in distance measurement. This single-pixel PSD 447 generates a photocurrent from the received light spot, from which the spot's position can be calculated relative to the total active area. The third detector type shown is a single-row, multi-pixel PSD 449, which is also capable of detecting in one dimension. In this detector's 449 configuration, the active area 450 is implemented as a single row of multiple pixels. With detector 449, position may be determined according to which pixels of the array are illuminated.
Figure 5 is a schematic diagram illustrating two different configurations of the detector optics implemented in accordance with embodiments of the invention. In the first configuration 551, the reflected beam 530 enters the focusing lens 508 at an angle. To compensate for the angle of the incoming reflected beam 530, the detector 510 is shifted perpendicularly from the optical axis 552 of the focusing lens 508. In the second configuration 553, only the reflected beam 530 enters the microchannel structure 555, while stray light 554 is blocked.
Figure 6 illustrates the operation of a splitting mechanism according to an embodiment of the invention. Upon detection of target 606 within a predetermined distance range of the projectile, an explosive charge 605 ejects the laser housing 602 and the detector housing 603 from the cartridge 601. In some embodiments, this also assists in slowing the projectile. Once ejected, springs 604 separate the laser housing 602 and the detector housing 603, thereby clearing the projectile's trajectory. In an alternative embodiment, rather than, or in addition to, springs 604, an explosive charge may be used to separate housings 602 and 603.

Figure 7 illustrates an embodiment of the invention implemented in conjunction with medium caliber projectiles with airburst capabilities. The illustrated embodiment comprises a compact proximity sensor attached to an ogive 704 of a medium caliber projectile. The laser diode 701 emits a modulated laser beam oriented along the longitudinal axis of the projectile, which is collimated by a collimating lens 702. Photodetectors 708 are arranged in an axially symmetrical pattern around the laser diode 701. The optical arrangement of a focusing lens 709 and a photodetector 708 produces an output electrical signal 712 from the photodetector only if a reflecting target 705 or 713 is located in front of the projectile at a distance less than a predefined standoff range. A target 714 located at a distance longer than the standoff range does not produce an output electrical signal 712. An array of axially symmetrical detectors makes target detection more reliable and enhances detector sensitivity. Output analog electrical signals from each photodetector 708 are gated in accordance with the laser modulation frequency and then, instead of immediate thresholding, they are transmitted to electronic circuitry 710 for summation.
Summation of signals increases the signal to noise ratio. After summation the integrated signal is thresholded and delivered to a safe & arm device 711 of the projectile, initiating its airburst detonation.
Figure 8 illustrates a schematic diagram of electronic circuitry implemented in accordance with an embodiment of the invention. When the projectile receives acceleration in the barrel, an accelerometer 816 initiates operation of a signal generator inside a microcontroller 817, which produces identical driving signals 818 to start and drive a laser driver 820 and the gating electronics 821 of a photodetector. An optical receiver 821 receives the light signal reflected from a target surface 805 and generates an output analog electrical signal, which is gated 822 and detected synchronously with the laser diode 801 operation. Gated signals are conditioned 823 and summed in the microcontroller 817. The output threshold signal 824 releases the safe & arm device of the projectile, which initiates the projectile explosive detonation. A power conditioning unit 815 supplies electrical power to the laser driver 820, the microcontroller 817, and the accelerometer switch 816.
Figure 9 illustrates a further embodiment of the invention. The optical impact system 902, 903, 904 and 905 in the illustrated embodiment is attached to a missile projectile 901. The air-to-ground guided missile approaches a target 908, 909 at a variable angle. In this embodiment, the missile trajectory is stable (not spinning). The optical impact system has a down-looking configuration enabling it to identify the appearance of a target at a predefined distance and trigger a missile warhead detonation in optimal proximity to the target. A laser transmitter 903 of the optical impact system transmits modulated light 906, 910 toward a potential target 908, 909. Depending on the distance to the target, the light reflected from the target can either impact 907 the photodetector 904 or miss 911 the photodetector. Control electronics 905 for driving and modulating the laser light and for synchronous detection of the reflected light is disposed inside the optical impact system housing 902.
Figure 10 illustrates an optical impact system with anti-countermeasure functionality implemented in accordance with an embodiment of the invention. Optical impact system anti-countermeasure functionality can be implemented with a plurality of laser sources 1001, 1002 operating at different wavelengths. The laser sources are controlled by an electronic driver 1003, which provides amplitude modulation of each laser source and controls synchronous operation of a photodetector 1005. The plurality of laser beams at a plurality of wavelengths is combined into a single optical path 1013 using a time domain multiplexer and a beam combiner 1004. The light reflected from a target 1016 located at a predefined distance contains all transmitted wavelengths 1014. It is acquired by a receiving tract comprising a photodetector 1005, comparator 1006, demultiplexer 1008, and signal analysis electronics 1009 and 1010 for each of the plurality of input signals. An electronic AND logic circuit 1011 generates the output trigger signal 1012 only if a valid signal is present in each of the wavelength channels. A laser countermeasure 1015 will, with high probability, operate at a single wavelength and will deliver a signal to the AND logic in only one channel, so the output trigger signal will not be generated.
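The following is a minimal illustrative sketch, in Python, of the AND-logic verification described for Figure 10; it is not part of the original disclosure. The correlation-based channel detector, the threshold value, and all variable names are assumptions introduced only to show how a trigger can require a valid modulated return in every wavelength channel.

import numpy as np

def channel_has_valid_return(samples, reference, threshold):
    """Correlate a photodetector channel with the known modulation reference."""
    # Normalized correlation acts as a simple synchronous (gated) detector.
    score = np.dot(samples, reference) / (np.linalg.norm(reference) ** 2 + 1e-12)
    return score > threshold

def fire_trigger(channels, reference, threshold=0.5):
    """AND logic: all wavelength channels must show the expected modulation."""
    return all(channel_has_valid_return(ch, reference, threshold) for ch in channels)

# Example: a single-wavelength countermeasure excites only channel 0,
# so the AND condition fails and no trigger is generated.
t = np.arange(200)
reference = np.sign(np.sin(2 * np.pi * t / 20))       # transmitted modulation
ch0 = 0.8 * reference + 0.1 * np.random.randn(200)    # decoy present at wavelength 1
ch1 = 0.1 * np.random.randn(200)                      # nothing at wavelength 2
print(fire_trigger([ch0, ch1], reference))            # False -> no false fire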
Figure 11 illustrates the geometry of an edge emitting laser. In some embodiments of the invention, the light from the laser source is projected onto a target and imaged at a photodetector. As used herein, the term "Source Contour Imaging" (SCI) means low-resolution imaging of the source's strip thickness. As illustrated in Figure 11, a laser source 1101 has a thickness Δu, 1102, which will be used in calculations herein. In various embodiments, the source strip parameters are controlled for optical triangulation (OT), which is applied for SCI sensing. The OT principle is based on finding the location of the center of gravity of the source strip by a two-lens system. In some embodiments, both lenses (one at the emitter and one at the detector) are applied for imaging of the 1D dimension; thus, both are cylindrical, with lens curvature in the same plane, which is also the plane perpendicular to the source's strip.
Figure 12 illustrates an optical triangulation geometry. Knowing one side (FG) 1202 and the two adjacent angles (φ 1203, φ0 1201) of the triangle FEG 1205, as in Figure 12, we can find all remaining elements of the triangle, such as sides a 1207 and b 1206, and its height EH 1208.
Point G 1204 is known (it is the center of the laser source), and angle φ0 1201 is known (it is the source's beam direction). When we measure the center of gravity of the Source Contour Image (SCI) strip, we determine point F 1209; then side c = FG 1202 is found, and angle φ 1203 is also found. Therefore, according to the OT principle, all other triangle elements are found. In the practical case, c«a and c«b. This is because a and b are on the order of meters, while c is on the order of centimeters. Therefore, both angles (φ, φ0) must be close to 90°. According to Figure 12, EH 1208 = a·sinφ. However, the accuracy of the φ-angle measurement is very good:

δφ = δc/a = 20 μm / 10 m = 2·10⁻⁶   (1)
This is because the center of gravity F 1209 is measured with accuracy δc = 20 μm, or even better, as discussed later. Therefore, the measured height, (EH)', is (since δφ«1):

(EH)' = a·sin(φ + δφ) ≈ EH + a·δφ   (2)

i.e., measured with high accuracy, in the range of 10-20 μm.
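A short numeric sketch of the optical triangulation relation above, written in Python; it is not part of the original disclosure. The baseline c = 5 cm and the two angle values are assumed purely for illustration.

import math

def triangulate_height(c, phi_deg, phi0_deg):
    phi, phi0 = math.radians(phi_deg), math.radians(phi0_deg)
    # Law of sines: EF / sin(phi0) = c / sin(180 deg - phi - phi0)
    ef = c * math.sin(phi0) / math.sin(phi + phi0)
    return ef * math.sin(phi)          # EH = EF * sin(phi)

# Both angles are close to 90 deg because c (cm) << a, b (m).
c = 0.05                               # 5 cm baseline, an assumed value
eh = triangulate_height(c, 89.8, 89.9)
print(round(eh, 2), "m")               # range on the order of meters (~9.55 m here)

# Angular accuracy from Eq. (1): d_phi = d_c / a, e.g. 20 um over 10 m
d_phi = 20e-6 / 10.0                   # = 2e-6 rad
print(d_phi)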
Figure 13 illustrates use of source contour imaging (SCI) to find the center of gravity of a laser source's strip transversal dimension, implemented in accordance with an embodiment of the invention. As illustrated, a laser source disposed in a sensor body projects a laser beam 1310 to a target 1311. The target 1311 is assumed to be a partially Lambertian surface, for example, a 10% Lambertian surface. A reflected beam 1312 is reflected from the target 1311 and detected at the detector. In this figure, the source strip 1301, with center of gravity G 1302 and size Δu 1303, is collimated by lens 1 (L1) 1304, with focal length f1 1305 and size D1, while imaging lens (L2) 1306 has dimensions f2 1307 and D2, respectively. For simplicity, in the illustrated embodiment, we assume f1 = f2 = f and D1 = D2 = D. (In other embodiments, these parameters may vary. For example, the 2nd lens may be larger to accommodate a larger linear pixel area). The size of the source beam at distance ℓ is, according to Figure 13:

DB = 2θℓ + D = (Δu·ℓ)/f + D = (Δu·ℓ)/f + f/f#   (3)
where, for θ«1, θ = Δu/2f, and f# = f/D is the so-called f-number of the lens. A typical, easy-to-fabricate (low cost) lens usually has f# > 2. As an example, for f# = 2, ℓ = 10 m, f = 2 cm, and Δu = 50 μm, we obtain

DB ≈ (10 m × 50 μm)/(2 cm) + (2 cm)/2 = (10⁴ mm)(0.05 mm)/(20 mm) + 1 cm = 2.5 cm + 1 cm = 3.5 cm   (4)
Eq. (3) can be rewritten as Eq. (5) (the equation is reproduced only as an image in the original document),
where the 2nd term does not depend on the source's size. This term determines the size of the source's image spot on the target, and accordingly contributes to the power output required of the laser. In order to reduce this term, some embodiments use reduced lens sizes. The distance to the target 1307, ℓ, is predetermined according to the concept of operations (CONOPS), and the f#-parameter defines how easy the lens is to produce and will also typically be fixed. Accordingly, the f-parameter frequently has the most latitude for modification. For example, by reducing the focal length 2-times, the 2nd factor will be reduced 4-times, to 2.5 mm, vs. the 2.5 cm value of the 1st term.
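A minimal numeric check of Eqs. (3)-(4), in Python, using the parameter values from the worked example above; this sketch is not part of the original disclosure.

delta_u = 50e-6      # source strip size, 50 um
l = 10.0             # target distance, 10 m
f = 0.02             # collimator focal length, 2 cm
f_number = 2.0

d_b = delta_u * l / f + f / f_number   # Eq. (3)
print(d_b)                             # 0.035 m = 3.5 cm, matching Eq. (4)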
As illustrated in Figure 13, the size of the source contour image (SCI), Δw 1308, is given by Eq. (6) (reproduced only as an image in the original document), where χ is a correction factor which, in good approximation, assuming angle ACB 1313 is close to 90°, is equal to

χ ≈ … / cos(α + β)   (7)

Since χ ≤ 1 and h = ℓ, Eq. (6) can be approximated by:

Δw ≈ Δu + f²/(h·f#)   (8)

which is approximately constant, assuming the Δu, f, f#, and h parameters are fixed. Assuming, as an example, Δu = 50 μm, f = 2 cm, h = 10 m, and f# = 2, we obtain

Δw = 50 μm + 20 μm = 70 μm   (9)

Eq. (6) is based on a number of approximations which are well satisfied in the case of low-resolution imaging such as the SCI.
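A minimal numeric check of the approximated SCI size, Eqs. (8)-(9), in Python, with the same assumed parameters as the example above; this sketch is not part of the original disclosure.

delta_u = 50e-6      # 50 um source strip
f = 0.02             # 2 cm focal length
h = 10.0             # 10 m target distance
f_number = 2.0

delta_w = delta_u + f ** 2 / (h * f_number)   # Eq. (8)
print(delta_w)                                # 7e-05 m = 70 um, matching Eq. (9)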
As illustrated in Figure 13, in some embodiments, SCI is based on an approximate formula that results under the assumption that, instead of imaging the contour area AB 1314, its projection CB 1315 may be imaged. Furthermore, a second assumption is that area AB may be imaged instead of CB (i.e., that we can assume β = 0). However, AB 1314 is a part of the Lambertian surface of the target 1311, which means that each point of the AB-area reflects spherical waves (not shown) as a response to the collimated incident beam 1310 produced by source 1301 with center of gravity G 1302 and strip size Δu 1303.
Figure 14 illustrates an imaging lens geometry. In order to show that area CB 1313 indeed images (approximately) into an area of about Δw's size 1308, consider the simple imaging lens 1403 geometry of Figure 14, where the x parameter 1401 is the distance of an object point 1402 (P) plane from the lens, while y 1404 is the distance of its image (Q) 1405 plane from lens 1403. The image sharpness is determined by the de-focusing distance, d 1406, and the de-focusing spot, g 1407, with respect to the focal plane. The lens imaging equation is

1/x + 1/y = 1/f  ⟹  y = x·f/(x − f)   (10)

The de-focusing distance, d, is (for x»f)

d = y − f = f²/(x − f) ≈ f²/x   (11)

and, using the trigonometric sine theorem, we obtain

D/y = g/d  ⟹  g = d·D/y ≈ d·D/f = d/f#   (12)

Using Eq. (11) and the geometry of Figure 14 (x = h), we obtain

g = d/f# = f²/(f#·h)   (13)

For example, for f = 1 cm, f# = 2, and h = 10 m, we obtain g = 5 μm; i.e., 10% of the source's strip size (50 μm).
In order to verify the 2nd assumption, that we can approximate the position of the AB-contour by its CB-projection, the influence of the AC-distance (Δd) on image dis-location may be analyzed. In such a case, instead of the de-focusing distance, d, we introduce a new de-focusing distance, d', given by Eq. (14) (reproduced only as an image in the original document); i.e., this dis-location is (Δh/h)-times smaller than the d-distance, which is equal to f²/h. For example, for f = 1 cm and h = 10 m, we obtain d = 10 μm, and (Δh/h) = (AC/h) ≈ 2 cm/10 m = 0.002; i.e., in very good approximation d' = d, and treating the imaging of contour AB as equivalent to imaging of its projection CB results in reasonable imaging.
Figure 15 illustrates a method of detecting target size implemented in accordance with an embodiment of the invention. Figure 15 uses the same basic geometry and symbols as Figure 13, for the sake of clarity. Points G 1501 and F 1502 are the centers of lenses L1 1507 and L2 1508, respectively, and vector v 1503 represents the velocity of missile 1509 in the vicinity of the target 1510. During a time duration Δt 1504, missile 1509 traverses distance vΔt. The angles α and β are equivalent to those in Figure 13. Angles φ and φ0 are equivalent to those in Figure 12. Distance ℓ 1505 is within the predetermined distance range for triggering the missile 1509 to explode. For example, distance ℓ 1505 may be an optimal predetermined target distance, and the predetermined distance range may be a range around distance ℓ 1505 where target sensing is possible. At an initial distance, determined by the detection system geometry or laser power, the target 1510 becomes initially detectable. This allows detection of the target 1510 through a Δs target area 1506, during time Δt 1504.
From the sine theorem, we have:

ℓ / sin(90° + α + β) = s / sin δ   (15)

where γ is the angle between the missile speed vector, v 1503, and the surface of target 1510, while sin(90° + α + β) = cos(α + β), and the angle δ is

δ = 180° − γ − (90° + α + β) = 90° − (γ + α + β)   (16)

thus, Eq. (15) becomes:

ℓ / cos(α + β) = s / cos(γ + α + β)   (17)

According to Thales' theorem, we have:

vΔt / ℓ = Δs / s   (18)

Substituting Eq. (17) into Eq. (18), we obtain

Δs = vΔt·(s/ℓ) = vΔt·cos(γ + α + β)/cos(α + β) = χ0·vΔt   (19)

For typical applications, the γ-angle is close to 90°, while angles α and β are rather small (and angle δ is small). For example, assuming δ = 10°, so that γ + α + β = 80° and α + β = 20°, we obtain χ0 = 0.18, and, for vΔt = 10 m, we obtain

Δs = (0.18)(10 m) = 1.8 m   (20)

In a typical application, assuming v·Δt = 10 m and v = 400 m/sec, for example, we obtain

Δt = (10 m)/(400 m/sec) = 0.025 sec = 25 msec   (21)
This illustrates typical times, Δt, that are available for target sensing. Therefore, in this example, the detection system can determine that the detected target has at least one dimension greater than or equal to 1.8 m in size. This provides a counter-countermeasure (CCM) against obstacles smaller than 1.5 m. In order to increase the CCM power, we should increase the χ0-factor by increasing the angle δ. For example, if the missile 1509 has a more inclined direction, obtained by reducing the angle γ, Δs 1506 increases. For example, for δ = 20° and the same other parameters, we obtain χ0 = 0.36 and Δs = 3.6 m.
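A short Python sketch of the target-size estimate of Eqs. (19)-(21), reproducing the example values above; it is not part of the original disclosure, and the way the angles are passed to the helper function is an illustrative assumption.

import math

def chi0(gamma_deg, alpha_beta_deg):
    return math.cos(math.radians(gamma_deg + alpha_beta_deg)) / math.cos(math.radians(alpha_beta_deg))

v = 400.0                      # projectile speed, m/s
v_dt = 10.0                    # distance travelled while the target is sensed, m
dt = v_dt / v                  # available sensing time, Eq. (21): 25 ms
x0 = chi0(60.0, 20.0)          # gamma = 60 deg, alpha + beta = 20 deg -> gamma+alpha+beta = 80 deg
print(round(dt * 1e3, 1), "ms", round(x0, 2), round(x0 * v_dt, 1), "m")
# -> 25.0 ms, 0.18, 1.8 m (the swept target dimension, Eq. (20))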
In embodiments utilizing a photodetector having a major axis (for example,
photodetectors 447 and 449 of Figure 4), the distance Δs 1506 may be increased by positioning the major axis in the plane of Figure 15. In a further embodiment, the photodetector comprises a quadratic pixel array. In this embodiment, control logic is provided in the detection system to automatically select the (virtual) linear pixel array with minimum size. In still further embodiments, a plurality of photodetectors is positioned radially around the detector system, for example as described in Figure 7. In these embodiments, control logic may be configured to select the sensor which is located most closely to the plane of Figure 15 for target detection.
Figure 16 illustrates an embodiment of the invention utilizing vignetting for determining if a target is within a predetermined distance range. In the illustrated embodiment, optical proximity sensor 1600 emits a light beam 1606 from a light source 1601. The sensor 1600 is coupled to a projectile that is moving towards a target. In the sensor's frame of reference, this results in the target moving towards the sensor 1600 with velocity v 1613. For example, in the illustrated embodiment, the target moves from a first position 1612, to a second position 1611, to a third position 1610. The sensor 1600 includes a detector 1604. The detector 1604 comprises a photodetector 1603 positioned behind an aperture 1614. In the illustrated embodiment, lenses are foregone, and target imaging proceeds with vignetting, or shadowing, alone. For example, when the target is at the third position 1610 at distance h3 from the sensor 1600, the reflected light beam 1607 strikes a wall 1602 of the detector 1604 rather than the photodetector 1603. In contrast, the entire reflected beam 1609 from the first target position 1612 impinges the photodetector 1603. As the Figure illustrates, there is a target position 1612 where the edge of the imaged beam 1605 abuts the edge of the photodetector 1603. As the sensor 1600 moves closer to the target, less and less of the beam will impinge the photodetector 1603, until the beam no longer impinges the photodetector 1603 (for example, at position 1610). Similarly, as the sensor 1600 first comes within range of the target, the beam will partially impinge on the photodetector 1603. The beam will then traverse the detector until it fully strikes the
photodetector 1603. Accordingly, as the sensor traverses the predetermined distance range, the signal from the photodetector will first rise, then plateau, then begin to fall. In an embodiment of the invention, the specific detonation distance within this range is chosen when the signal begins to fall, or has fallen to some predetermined level (for example, 50% of maximum). Accordingly, the time in which the signal increases and plateaus may be used for target verification, while still supporting a relatively precise targeting distance for detonation.
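A minimal Python sketch of the rise/plateau/fall trigger logic described for Figure 16; it is not part of the original disclosure. The sample waveform and the 50% fraction are assumptions chosen only for illustration.

def find_detonation_index(signal, fraction=0.5):
    peak = max(signal)
    peak_index = signal.index(peak)
    # Require the rise/plateau first, then trigger on the falling edge.
    for i in range(peak_index, len(signal)):
        if signal[i] <= fraction * peak:
            return i
    return None   # target never confirmed within range

samples = [0.0, 0.2, 0.6, 0.9, 1.0, 1.0, 0.95, 0.7, 0.45, 0.2]
print(find_detonation_index(samples))   # -> 8 (first sample at or below 50% of the peak)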
Figure 17 illustrates a lensless light source for use in an optical proximity sensor implemented in accordance with an embodiment of the invention. In some embodiments, the light source 1700 can also be vignetted. Figure 17 illustrates variables for quantitative analysis purposes. Variables include the vignetting opening 1701 size, Δa, the source size, Δu, the vignetting length, s, and the resulting source beam divergence, 2θ. Then, the source beam size, ΔB, at the target distance, h, is

ΔB = 2θ(h + s2) ≈ 2θh   (31)

since s2«h, as in Figure 17. From this figure, we have:

Δu/s1 = Δa/s2 = 2θ and s1 + s2 = s   (32)

Solving Eqs. (32), we obtain

s1 = s/(1 + k), s2 = s·k/(1 + k)   (33)

where k is called the vignetting coefficient, being the ratio of the vignetting opening size to the source size:

k = Δa/Δu   (34)

Usually k > 1 for practical reasons. For example, for Δu = 50 μm (an edge-emitter strip size), Δa = 100 μm can be easily achieved; then, k = 2. Substituting Eq. (33) into Eq. (31), we obtain

ΔB = (Δu·h/s)(1 + k)   (35)

For example, for k = 2, Δu = 50 μm (then Δa = 100 μm), s = 5 cm, and h = 10 m, we obtain ΔB = 3 cm.
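A minimal numeric check of Eq. (35), in Python, with the example values above; this sketch is not part of the original disclosure.

delta_u = 50e-6     # source size, 50 um
delta_a = 100e-6    # vignetting opening, 100 um -> k = 2
s = 0.05            # vignetting length, 5 cm
h = 10.0            # target distance, 10 m

k = delta_a / delta_u
delta_b = delta_u * h / s * (1 + k)   # Eq. (35)
print(delta_b)                        # 0.03 m = 3 cm, matching the example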
In further embodiments, the light source may be imaged directly onto the target area. A Lambertian target surface backscatters the source beam into the detector area, where a second imaging system is provided, resulting in dual imaging, or cascade imaging. Figure 18 illustrates variables of a lens system for quantitative analysis purposes. In various embodiments, the viewing beam imaging can be provided with a single-lens or dual-lens system. Consider the imaging equation in the form x⁻¹ + y⁻¹ = f⁻¹, where x and y are the distances of the object plane and image plane from the lens and f is the focal length. Then, in order to obtain single-lens imaging with a short x-value (for example, a few cm) and a long y-value (for example, y ≈ 10 m), we need to place the source close behind the focus, at a distance Δx:

Δx = x − f = f²/(y − f) ≈ f²/y   (36)

For example, for f = 2 cm and y = 10 m, we obtain Δx ≈ 40 μm, which is a very small value for precise adjustment. The positioning requirements can be made less demanding by utilizing a dual-lens imaging system.
Figure 18 illustrates a dual lens geometry. Two convex lenses, 1801 and 1802, are provided for source (viewing) beam imaging, with focal lengths f1 and f2, including an imaging equation for the 1st lens (x1, y1, f1) and an imaging equation for the 2nd lens (x2, y2, f2). A point source, O, is included, for simplicity, with its image, O'. In the illustration, the source is placed in front of the 1st focus, F1, at a distance Δx1 from the focal plane. Then, the 1st image is imaginary, with negative distance y1 = −|y1|, where |...| is the modulus operation, and the 1st image equation has the form:

1/x1 − 1/|y1| = 1/f1   (37)

and,

Δx1 = f1 − x1 = f1²/(f1 + |y1|) ≈ f1²/|y1|   (38)

for |y1|»f1. For example, for f1 = 3 cm and Δx1 = 0.5 mm, we obtain |y1| = 1.8 m. A 0.5 mm adjustment may be more manageable than a 40 μm adjustment, as for the single-lens system. Now, we take the 1st imaginary image as the 2nd real object, at distance x2 = |y1|. Therefore, the required 2nd lens focal length, f2, satisfies

1/f2 = 1/x2 + 1/y2 = 1/|y1| + 1/y2   (39)

and,

f2 < x2 = |y1| = 1.8 m   (40)

as expected. In this case, the system magnification is

M = (|y1|/x1)·(y2/x2) = y2/x1 ≈ y2/f1   (41)

and the final image size for an edge-emitter strip size of 50 μm will be (333)(50 μm) = 1.66 cm. For this dual-lens system, by adding the two image equations together, we obtain the following summary image equation:

1/x1 + 1/y2 = 1/f1 + 1/f2 = 1/fS   (42)

where fS is the dual-lens system focal length.
In typical embodiments, the lens curvature radius, R, is larger than half of the lens size, D; R > D/2. For a plano-convex lens, we have f⁻¹ = (n−1)R⁻¹, where n is the refractive index of the lens material (n = 1.55); thus, approximately, f ≈ 2R, while for a double-convex lens, f ≈ R. Also, for cheaply and easily made lenses, the f#-ratio parameter (f# = f/D) will typically be larger than 2: f# > 2. Using this relation, for a plano-convex lens we obtain R > D, and for a double-convex lens, R > 2D; i.e., in both cases R > D/2, as it should be in order to satisfy system compactness.
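A short Python sketch comparing the single-lens and dual-lens positioning tolerances of Eqs. (36)-(41), with the worked-example values above; this sketch is not part of the original disclosure.

f_single = 0.02                   # 2 cm
y = 10.0                          # image (target) distance, 10 m
dx_single = f_single ** 2 / (y - f_single)      # Eq. (36): ~40 um

f1 = 0.03                         # 3 cm first lens
dx1 = 0.5e-3                      # chosen 0.5 mm offset in front of the focus
y1 = f1 ** 2 / dx1 - f1           # from Eq. (38): |y1| ~ 1.8 m (virtual image)
y2 = 10.0                         # final image distance
f2 = 1.0 / (1.0 / y1 + 1.0 / y2)  # Eq. (39): required second focal length
m = y2 / f1                       # Eq. (41): overall magnification ~ 333
print(round(dx_single * 1e6), "um", round(y1, 2), "m", round(f2, 2), "m", round(m))
# -> 40 um, 1.77 m, 1.5 m, 333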
Potential sources of interference and false alarms include natural and common artificial light sources, such as lightning, solar illumination, traffic lighting, airport lighting, etc... In some embodiments, protection from these false alarm sources is provided by applying narrow wavelength filtering centered around the laser diode wavelength, λ0. In some embodiments, dispersive devices (prisms, gratings, holograms), or optical filters, are used. Interference filters, especially reflective ones, have higher filtering power (i.e., high rejection of unwanted spectrum while high acceptance of source spectrum) at the expense of angular wavelength dispersion. In contrast, absorption filters have lower filtering power while avoiding angular wavelength dispersion. Dispersive devices such as gratings are based on grating wavelength dispersion. Among them, volume (Bragg) holographic gratings have the advantage of selecting only one diffraction first order (instead of two, as in the case of thin gratings); thus, increasing filtering power by at least a factor of two.
Reflection interference filters have higher filtering power than transmission ones due to the fact that it is easier to reflect a narrower spectrum than a broader one. For example, a Lippmann reflection filter comprises a plurality of interference layers that are parallel to the surface. Such a filter can be made either holographically (in which case the refractive index modulation is sinusoidal) or by thin-film coating (in which case the refractive index modulation is quadratic).
From coupled-wave theory, in order to obtain 99% diffractive efficiency, the following approximate condition has to be satisfied:

Δn·T / λ0′ = 1   (43)

where Δn is the refractive index modulation, T is the filter thickness, and λ0′ is the central wavelength in the medium with refractive index n. Since Λ = λ0/2n, Δn/n = Δλ/λ, and Δn = λ0/(nT), we obtain

Δλ/λ = 2/(n·N)   (44)

where N = T/Λ is the number of periods, or number of interference layers. For a typical polymeric (plastic) medium, we have n = 1.55; so, Eq. (44) becomes

Δλ/λ = 1.29/N   (45)

For example, for λ0 = 600 nm and Δλ = 10 nm, Δλ/λ = 1/60 = 0.0167, and N = 77. Accordingly, in order to obtain higher filtering power, the number of interference layers should be larger.
For a slanted incidence angle, θ′, in the medium (where θ′ = 0 corresponds to normal incidence), the Bragg wavelength, λ0, is shifted to shorter values (the so-called blue shift):

λ = λ0·cosθ′   (46)

therefore, the relative blue-shift value is

δλ/λ0 = 1 − cosθ′   (47)

Using Snell's law, sinθ = n·sinθ′, we obtain, for θ′«1,

δλ/λ0 ≈ θ′²/2 = θ²/(2n²)   (48)

For example, for δλ = 10 nm, λ0 = 600 nm, and n = 1.55, we obtain θ = 16.4°. Therefore, the total spectral width is Δλ + δλ; i.e., about 20 nm in this example.
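A minimal Python check of the filter relations, Eqs. (44)-(48), using the example values above; this sketch is not part of the original disclosure.

import math

n = 1.55
lam = 600e-9                     # central wavelength, 600 nm
d_lam = 10e-9                    # desired filter bandwidth / blue shift, 10 nm

N = 2.0 / (n * d_lam / lam)      # Eq. (44): number of interference layers
theta_p = math.acos(1.0 - d_lam / lam)        # internal angle from Eq. (47)
theta = math.asin(n * math.sin(theta_p))      # Snell's law -> external angle
print(round(N), round(math.degrees(theta), 1))
# -> 77 layers, ~16.4 deg external acceptance angle; total width ~ 20 nm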
Figure 19 illustrates two detector geometries for use with reflection filters implemented in accordance with embodiments of the invention. In detector 1902, an aperture is formed in a detector housing 1903. In some embodiments, imaging is based entirely on vignetting. In other embodiments, lens or mirror based imaging systems may be combined with the aperture. The detector is configured to receive a beam 1910 reflected from a target. A reflective filter 1905 is configured to reflect only wavelengths near the wavelength or wavelengths of the laser light source or sources used in the proximity detector. Accordingly, filter 1905 filters out likely spurious light sources, reducing the probability of a false alarm. Filter 1905 is configured to reflect light at an angle to detector 1907. For example, such non-Lippmann slanted filters may be produced using holographic techniques. In detector 1902, a Lippmann filter 1906 is disposed at an angle with respect to the aperture, allowing beam 1909 to be filtered and reflected to detector 1908 as illustrated.
Another potential source of false alarms is environmental conditions. For example, optical signals can be significantly distorted, attenuated, scattered, or disrupted by harsh environmental conditions such as rain, snow, fog, smog, high temperature gradients, humidity, water droplets, and aerosol droplets. In some embodiments of the invention, in order to minimize the false alarm probability against these environmental causes, we maximize the laser diode conversion efficiency and also maximize the focusing power of the optical system. This is because, even at proximity distances (10 m or less), beam transmission can be significantly reduced by transmission medium (air) attenuation, especially in the case of smog, fog, and aerosol particles, for example. For a strong beam attenuation of 1 dB/m, the attenuation over a 10 m distance is 90%. Also, optical window transparency can be significantly reduced due to dirt, water particles, fatty acids, etc. In some embodiments, the use of a hygroscopic window material protects against the latter factor.
In some embodiments of the invention, high conversion efficiency (ratio of optical power to electrical power) can be obtained using VCSEL arrays. In further embodiments, the VCSEL arrays may be arranged in a spatial signature pattern, further increasing resistance to false alarms. For example, Figure 20 illustrates a VCSEL 2000 array arranged in a "T"-shaped distribution. Arranging the laser diodes into a desired spatial distribution avoids signature masks which would block some illumination and thus reduce the optical power, or effective conversion efficiency, ηeff, defined as:

ηeff = η1 · η2   (49)

where η1 is the common conversion efficiency, and η2 is the masking efficiency.
In further embodiments, beam focusing lens source geometries such as projection imaging and detection imaging, as discussed above, provide further protection from beam attenuation. To further reduce attenuation, the system magnification M, defined by Eq. (41), is reduced by increasing the f1-value. In order to still preserve compactness, at least in the vertical dimension, in some embodiments the horizontal dimension is increased by using mirrors or prisms to provide a periscopic system.
A high temperature gradient (~100°C) can cause strong material expansion, thus reducing the mechanical stability of the optical system. In some embodiments, the effects of temperature gradients are reduced. The temperature gradient, ΔT, between the T1-temperature at high altitudes (e.g., -10°C) and the T2-temperature of air heated by friction against the missile body (e.g., +80°C) creates an expansion, Δℓ, of the material, according to the following formula (ΔT = T2 − T1):

Δℓ/ℓ = α · ΔT   (50)

where α is the linear expansion coefficient in 10⁻⁶ (°C)⁻¹ units. Typical α-values are: Al – 17, steel – 11, copper – 17, glass – 9, glass (pyrex) – 3.2, and fused quartz – 0.5. For example, for α = 10⁻⁶ (°C)⁻¹ and ΔT = 100°C, we obtain Δℓ/ℓ = 10⁻⁴, and for ℓ = 1 cm, Δℓ = 1 μm. This is a small value, but it can cause problems for metal-glass interfaces. For example, for a steel/quartz interface, Δα = (11 − 0.5)·10⁻⁶ (°C)⁻¹, and for ΔT = 100°C and ℓ = 1 cm, we obtain δ(Δℓ) = (11 − 0.5)·10⁻⁴ cm ≈ 10⁻³ cm = 10 μm, which is a large value for micro-mechanical architectures (1 mil = 25.4 μm, the approximate thickness of a human hair). In some embodiments, index-matching architectures are implemented to avoid such large Δα-values at mechanical interfaces.
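A minimal numeric check of the thermal-mismatch example, Eq. (50), in Python; this sketch is not part of the original disclosure.

alpha_steel = 11e-6      # 1/degC
alpha_quartz = 0.5e-6    # 1/degC
dT = 100.0               # degC
l = 0.01                 # 1 cm, in meters

mismatch = (alpha_steel - alpha_quartz) * dT * l
print(mismatch * 1e6, "um")   # ~10.5 um differential expansion at the interface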
Additionally, attempts at active countermeasures may be made by adversaries. In some embodiments, anti-countermeasure techniques are employed to reduce false alarms caused by countermeasures. Examples include the use of spatial and temporal signatures. One such spatial signature has been illustrated in Figure 20, where two VCSEL linear arrays 2001 and 2002, forming the shape of the letter "T", have been used. In other embodiments, other spatial distributions of light sources may be used to produce a spatial signature for the optical proximity fuze. Such spatial signatures, in order to be recognized, have to be imaged at the detector space by using a 2D photodetector array. In other embodiments, masks may be used to provide a spatial signature. For example, Figure 21 illustrates a mask applied to an edge emitting laser source 2100. Masked areas 2101 are blocked from emitting light, while unmasked areas 2102 are allowed to emit light.
In further embodiments, pulse length coding may be used to provide temporal signatures for anti-countermeasures. Figure 22 illustrates such pulse length modulation. In some embodiments, matching a pre-determined pulse length code may be used for anti-countermeasures. For example, the detection system may be configured to verify that the sequence of pulse lengths, t2k+1 − t2k, indexed by k, matches a predetermined sequence. In other embodiments, the detection system may be configured to verify that the sequence of start and end times for the pulses matches a predetermined sequence. For example, in Figure 22, the temporal locations of the zero points t1 2201, t2 2202, t3 2203, t4 2204, and t5 2205 are presented. These zero points may be compared by the detector against a predetermined sequence to verify target accuracy. In some embodiments, methods for edge detection, both spatial and temporal, are applied to assist in the use of spatial or temporal signatures. In order to improve edge recognition in both the spatial and temporal domains, in some embodiments, a) de-convolution or b) novelty filtering is applied to received optical signals.
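Before turning to those filtering operations, the following is a minimal Python sketch of the pulse-length-code check described above; it is not part of the original disclosure. The stored code values and the matching tolerance are assumptions for illustration only.

def pulse_lengths(edge_times):
    """edge_times = [t1, t2, t3, ...] alternating rising and falling edges."""
    return [edge_times[i + 1] - edge_times[i] for i in range(0, len(edge_times) - 1, 2)]

def matches_signature(edge_times, expected_lengths, tol=5e-9):
    measured = pulse_lengths(edge_times)
    return len(measured) == len(expected_lengths) and all(
        abs(m - e) <= tol for m, e in zip(measured, expected_lengths))

expected = [100e-9, 150e-9, 100e-9]            # stored pulse-length code (assumed)
edges = [0.0, 101e-9, 400e-9, 552e-9, 900e-9, 999e-9]
print(matches_signature(edges, expected))       # True: within the 5 ns tolerance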
De-convolution can be applied to any spatial or temporal imaging. Spatial imaging is usually 2D, while temporal imaging is usually 1D. Considering, for simplicity, the 1D spatial domain, the space-invariant imaging operation can be presented as (assuming M = 1):

I1(x) = ∫ h(x − x′)·I0(x′) dx′   (51)

where I1 and I0 are the image and object optical intensities, respectively, while h(x) is the so-called Point-Spread-Function (PSF), and its Fourier transform is the transfer function, H(fx), in the form:

H(fx) = F{h(x)} = ∫₋∞⁺∞ h(x)·exp(−j2π·fx·x) dx   (52)

where fx is the spatial frequency in number of lines per mm, while H(fx) is generally complex. Since Eq. (51) is a convolution of h(x) and I0(x), its Fourier transform is

Ĩ1(fx) = H(fx)·Ĩ0(fx)   (53)

thus,

Ĩ0(fx) = Ĩ1(fx) / H(fx)   (54)

and I0(x) can be found by the de-convolution operation; i.e., by applying Eq. (54) and the inverse Fourier transform of Ĩ0(fx):

I0(x) = ∫₋∞⁺∞ Ĩ0(fx)·exp(+j2π·fx·x) dfx   (55)

Such an operation is computationally manageable if the H-function does not have zero values, which is typically the case for optical operations such as those described here. Therefore, even if the image function I1(x) is distorted by the backscattering process and by de-focusing, it can still be restored for imaging purposes.
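A minimal 1D Python sketch of the de-convolution of Eqs. (51)-(55); it is not part of the original disclosure. The small regularization term that keeps the division stable where H is small is an implementation assumption, not something stated in the text.

import numpy as np

def deconvolve_1d(image, psf, eps=1e-6):
    n = len(image)
    H = np.fft.fft(psf, n)                          # transfer function, Eq. (52)
    I1 = np.fft.fft(image, n)
    I0 = I1 * np.conj(H) / (np.abs(H) ** 2 + eps)   # regularized form of Eq. (54)
    return np.real(np.fft.ifft(I0))                 # inverse transform, Eq. (55)

# Example: blur a sharp edge with a box PSF (circular convolution), then restore it.
obj = np.zeros(64)
obj[20:40] = 1.0
psf = np.ones(5) / 5.0
img = np.real(np.fft.ifft(np.fft.fft(obj) * np.fft.fft(psf, 64)))   # Eq. (51)
restored = deconvolve_1d(img, psf)
print(np.round(restored[18:23], 2))   # the edge near index 20 is recovered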
Novelty filtering is an electronic operation applied for spatial imaging purposes. It can be applied to such spatial signatures as the VCSEL array pattern because each single VCSEL area has four spatial edges. Therefore, if we shift the VCSEL array image, in the electronic domain, by a fraction of a single VCSEL area and subtract the un-shifted and shifted images in the spatial domain, we obtain novelty signals at the edges, as shown in 1D geometry in Figure 23. As illustrated in Figure 23, novelty filtering comprises determining a first spatial signature 2300 and shifting the spatial signature in the spatial domain to determine a second spatial signature 2301. Subtracting the two images 2300 and 2301 results in a set 2302 of novelty features 2303 that may be used for edge detection.
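A 1D Python sketch of the shift-and-subtract novelty filter of Figure 23; it is not part of the original disclosure, and the sample pattern and shift amount are assumptions for illustration.

import numpy as np

def novelty_filter(signal, shift=2):
    shifted = np.roll(signal, shift)
    return signal - shifted            # non-zero only near edges

pattern = np.array([0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0], dtype=float)
print(novelty_filter(pattern))
# non-zero values (+1 / -1) of width equal to the shift mark the pattern edges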
Figure 24 illustrates multi-wavelength light source and detection implemented in accordance with an embodiment of the invention. Figure 24A illustrates the light source in the source plane, while Figure 24B illustrates the detector plane. In this Figure, the axes are as labeled, with the plane of Figure 13 being the (X, Y)-plane. In the illustrated embodiment, two light sources 2400 and 2401, such as VCSEL arrays, are disposed in the (X, Z)-plane and emit two wavelengths, λ1 and λ2, respectively. The illustrated embodiment uses spherical lenses (not cylindrical lenses) in order to image the 2D source plane into the 2D detector plane. The detectors D1 and D2, 2402 and 2403, are covered by narrow wavelength filters, as described above, corresponding to the source wavelengths λ1 and λ2. Assuming |λ2 − λ1| > 50 nm, we can apply narrow filters with Δλ1 = Δλ2 = 20 nm, for example (thus Δλ + δλ ≈ 30 nm), to achieve good wavelength separation. It is convenient to place both detectors in the same optical system in order to achieve the same imaging operation for both sources. (This is, however, unnecessary.) As a result, we obtain two orthogonal image patterns, to which we can add any temporal coding for further false alarm reduction.
The precision of temporal edge detection is defined by the False Alarm Rate (FAR), defined in the following way:

FAR = [1/(2τ√3)]·exp(−IT²/2In²)   (56)

where In is the noise signal (related to optical intensity), IT is the threshold intensity, and τ is the pulse temporal length. Assuming a phase (time) accuracy of 1 nsec, the pulse temporal length, τ, can be equal to 100 nsec = 0.1 μsec, for example. In such a case, for an optical impact duration of 10 msec, during which the target is being detected, the number of pulses can be 10 msec/100 nsec = 10⁴ μsec/0.1 μsec = 10⁵, which is a sufficiently large number for coding operations. Eq. (56) can be written as:

τFAR = [1/(2√3)]·exp(−x²/2), x = IT/In   (57)

which can be interpreted as the number of false alarms (signals) per pulse, which is close to the BER (bit-error-rate) definition. By a false alarm (in the narrow sense) we mean the situation when the noise signal is higher than the threshold signal; i.e., a decision is made that a true signal exists when this is not the case. Eq. (57) is tabulated in Table 1 (x = IT/In).
Table 1. IT/In values versus τFAR (the table is reproduced only as an image in the original document).

As the table illustrates, for higher threshold values, τFAR decreases.
The second threshold probability is the probability of detection, defined as the probability that the summary signal, Is + In, is larger than the threshold signal, IT; i.e.,

Pd = P(Is + In > IT)   (58)

This probability has the form:

Pd = (1/2)[1 + erf(z/√2)] = (1/2)[1 + N(z)]   (59)

where the z-parameter is

z = (Is − IT)/In = (SNR) − x   (60)

and SNR = Is/In is the signal-to-noise ratio, while N(z) and erf(z) are two functions well-known in error probability theory:

N(z) = (2/√(2π)) ∫₀ᶻ exp(−t²/2) dt   (61)

erf(z) = (2/√π) ∫₀ᶻ exp(−t²) dt   (62)

Both are tabulated in almost all tables of integrals, where N(x) is called the normal probability integral, while erf(x) is called the error function, and N(x) = erf(x/√2). The probability of detection, Pd, and the normal probability integral are tabulated in Table 2, where z = (SNR) − x (note that the z-value in Table 2 is in units of the Gaussian (normal) probability distribution's dispersion, σ; i.e., z = 1 is equivalent to σ, while z = 2 is equivalent to 2σ, etc.).
Table 2. Probability of Detection as a Function of z = (SNR) − x; x = IT/In
z       0       1       2       3       4
N(z)    0       0.68    0.95    0.997   0.99994
Pd      0.50    0.84    0.977   0.999   0.99997
The signal intensity, Is, is defined by the application and the specific components used, as illustrated above, while the noise intensity, In, is defined by the detector's (electronic) noise and by optical noise. In the case of semiconductor detectors, the noise is characterized by the so-called specific detectivity, D*, in the form:
D* = (A·B)^(1/2)/(NEP)   (63)

where A is the detector area (in cm²), B is the detector bandwidth (for a periodic pulse signal, B = 1/2τ, where τ is the pulse temporal length), and (NEP) is the so-called Noise Equivalent Power, while

In = (NEP)/A

For typical quality detectors, D* > 10^12 cm·Hz^(1/2)·W^(−1). For example, for τ = 100 nsec, B = 5 MHz, D* = 10^12 cm·Hz^(1/2)·W^(−1), and A = 5 mm × 5 mm = 0.25 cm²,

(NEP) = (A·B)^(1/2)/D* = (0.25 · 5·10^6)^(1/2)/10^12 W ≅ 1.12·10^(−9) W = 1.12 nW   (64)

and In = (1.12 nW)/0.25 cm² = 4.48 nW/cm².
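As a numerical check of Eqs. (63) and (64), the following sketch recomputes NEP and In from the example parameters quoted above; the helper names are ours and the script is only an illustration of the arithmetic.

```python
import math

D_star = 1e12        # specific detectivity, cm*Hz^0.5/W
A = 0.5 * 0.5        # detector area, cm^2 (5 mm x 5 mm)
tau = 100e-9         # pulse temporal length, s
B = 1.0 / (2 * tau)  # bandwidth for a periodic pulse signal, Hz (5 MHz)

NEP = math.sqrt(A * B) / D_star        # Eq. (63) rearranged, W
I_n = NEP / A                          # noise intensity, W/cm^2

print(f"NEP = {NEP * 1e9:.2f} nW")       # ~1.12 nW
print(f"I_n = {I_n * 1e9:.2f} nW/cm^2")  # ~4.5 nW/cm^2
```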
According to Table 2, with increasing x-parameter (i.e., increasing threshold value, IT), Pd decreases; i.e., the system performance declines. However, with the x-parameter increasing, the TFAR value also decreases; i.e., the system performance increases. Therefore, there is a trade-off between those two tendencies, while the threshold value, IT, is usually located between the In and Is values: In < IT < Is. From Eq. (58), for Is = IT, z = 0, and Pd(0) = 1/2, while Pd(∞) = 1. Also, FAR(0) = 1, and FAR(∞) = 0. Therefore, for an ideal system (In = 0): FAR = 0, and Pd = 1.
Considering both threshold probabilities, TFAR and Pd, and two parameters, (x, z), we have two functional relations, TFAR(x) and Pd(z), with the additional condition z = (SNR) − x. Therefore, assuming:
1) GIVEN: (SNR) + one probability, we obtain all parameters (x, z) and the remaining probability.
2) GIVEN: both probabilities, we obtain the (x, z)-values.
3) GIVEN: the k-parameter as a fraction, IT = kIs, k < 1, + one probability, we obtain all the rest. For example, for a known Pd-value, we obtain z = x(k^(−1) − 1); so we obtain the x-parameter value, and then, from Table 1, we obtain the TFAR-value.
4) GIVEN: In, Is ⇒ (SNR), and one probability, we obtain all the rest.
To illustrate the trade-off between maximization of the Pd-probability and minimization of the TFAR-probability, we consider three examples.
• EXAMPLE 1. Assuming (SNR) = 5 and TFAR = 10^−4, we obtain x = 3.99 and z ≅ 5 − 4 = 1; thus, Pd(1) = 0.84, from Table 2.
• EXAMPLE 2. Assuming the same (SNR) = 5 but a worse (FAR), TFAR = 10^−3, we obtain x = 3.37 and z = 1.63; thus, N(z) = 0.8968 and Pd = 0.95; i.e., we obtain a better Pd-value.
From examples (1) and (2) we see that increasing the positive parameter, Pd, comes at the expense of increasing the negative parameter, TFAR, and vice versa. This trade-off may be improved by increasing the SNR, as shown in example (3).
• EXAMPLE 3. Assuming (SNR) = 8 and TFAR = 10^−6, we obtain x = 5.01 and z = 3; thus, Pd = 0.999. We see that by increasing the (SNR)-value, we can obtain excellent values of both threshold probabilities: a very low TFAR value (10^−6) while still preserving a high Pd-value (99.9%). Of course, for a higher Pd-value, e.g., Pd > 99.99%, we have z ≅ 4, and from (SNR) = 8, we obtain x = 4; thus TFAR ≅ 10^−4; i.e., this negative probability will be larger than the previous value (10^−6), thus confirming the trade-off rule.
Figure 25 illustrates a method of pulse detection using thresholding implemented in accordance with an embodiment of the invention. Figure 25A illustrates a series of pulses transmitted by a light source in an optical proximity fuze. Figure 25B illustrates the pulse 2502 received after transmission of pulse 2501. As illustrated, noise In results in distortion of the signal. A threshold IT 2503 may be established for the detector to register a detected pulse. Accordingly, pulse start time 2504 and end time 2505 may be detected as the times when the wave 2502 crosses the threshold 2503. For a high value of the threshold 2503, IT, the z-parameter will be low; thus, the probability of detection will also be low, while for a low IT-value 2503, the x-parameter will be low; thus, the False Alarm Rate (FAR) will be high. In some embodiments, a low pass filter is used in the detection system to smooth out the received pulse. Figure 26 illustrates this process. An initially received pulse 2600 has many of its high frequency components removed after passage through a low pass filter, resulting in a smoothed wave pulse 2601. This low pass operation results in less ambiguity in the regions 2602 where the pulses cross the threshold value. As the initially transmitted wave pulses do not include components above a certain frequency level, the noise signal intensity, In, may be reduced to a smoothed value, In′, as in Figure 26. Therefore, the signal-to-noise ratio, (SNR) = Is/In, is increased to a new value:
(SNR)′ = Is/In′ > (SNR) = Is/In   (65)
Therefore, the trade-off between Pd and (FAR) will also be improved. According to Eq. (60),

(SNR) = x + z   (66)

In some embodiments, the x-value is increased, with increasing (SNR)-value due to Eq. (65), in order to reduce the TFAR-value, as in Eq. (57). This is because, with the (SNR)-value increasing due to the smoothing technique, as in Eq. (65), we can increase the x-value while keeping the z-value constant, according to Eq. (66), which minimizes the TFAR-value, due to Eq. (57). For example, if before the smoothing technique, illustrated in Figure 26, the TFAR-value was 10^−4, then, with the (SNR)-value increased by 1 due to the smoothing technique, the x-value could also increase by 1 (while keeping the z-value the same). Then, according to Table 1, the TFAR-value will decrease from 10^−4 to 10^−6, which is a significant improvement of system performance.
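A simple way to see the smoothing effect is to low-pass filter a noisy rectangular pulse and compare the residual noise. The sketch below uses a moving-average filter purely as an illustration; the filter choice, pulse shape, and noise level are assumptions, not part of the disclosure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
pulse = np.zeros(n)
pulse[800:1200] = 1.0                      # ideal rectangular pulse, Is = 1
noisy = pulse + 0.3 * rng.standard_normal(n)

kernel = np.ones(50) / 50                  # moving-average low-pass filter
smoothed = np.convolve(noisy, kernel, mode="same")

I_n  = np.std(noisy[:500])                 # noise level before filtering
I_n2 = np.std(smoothed[:500])              # noise level after filtering, In' < In
print(f"SNR  = {1.0 / I_n:.1f}")
print(f"SNR' = {1.0 / I_n2:.1f}")          # improved, per Eq. (65)
```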
In summary, by introducing the smoothing technique, or low-pass filtering, we increase the (SNR)-value, which, in turn, improves the trade-off between the two threshold probabilities, TFAR and Pd. Then, the threshold value, IT, is defined by this new, improved trade-off. In a particular embodiment, a procedure for finding the threshold value, (IT)0, is as follows.
STEP 1. Provide an experimental realization of Figure 25B, in order to determine the experimental value of the smoothed noise intensity, In′.
STEP 2. Determine, by calibration, the conservative signal value, Is, for a given phase of the optical impact duration, including: rising phase, maximum phase, and declining phase. Find the (SNR)′-value according to Eq. (65): (SNR)′ = Is/In′.
STEP 3. Apply relation (66), (SNR)′ = x + z, and the two definitions of the threshold probabilities, Eq. (57) and Eq. (59). Determine the required value of TFAR and use approximate Table 1, or the exact relation (57), in order to find the x-value: x = IT/In′. The resulting threshold value, IT, is then found.
STEP 4. Using the x-value from STEP 3, find the z-value from Eq. (66), and then find the Pd-value from approximate Table 2, or the exact relation (59). If the resulting Pd-value is satisfactory, the procedure ends. If not, verify the Is-statistics, and/or try to improve the smoothing procedure. Then, repeat the procedure, starting from STEP 1.
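The four-step procedure can be expressed as a short routine. This is a schematic rendering under the reconstructed Eqs. (57), (59) and (66); the measured intensities and target probabilities passed in are hypothetical inputs.

```python
import math

def find_threshold(I_s, I_n_prime, tfar_required, pd_required):
    """STEPs 1-4: given the smoothed noise In', the calibrated signal Is and a
    required false-alarm rate, return the threshold IT and the resulting Pd."""
    snr = I_s / I_n_prime                                            # STEP 2, Eq. (65)
    x = math.sqrt(-2 * math.log(2 * math.sqrt(3) * tfar_required))   # STEP 3, Eq. (57)
    I_T = x * I_n_prime                                              # threshold intensity
    z = snr - x                                                      # STEP 4, Eq. (66)
    p_d = 0.5 * (1 + math.erf(z / math.sqrt(2)))                     # Eq. (59)
    return I_T, p_d, p_d >= pd_required

# Hypothetical calibration values (W/cm^2) and target probabilities
I_T, p_d, ok = find_threshold(I_s=40e-9, I_n_prime=5e-9,
                              tfar_required=1e-6, pd_required=0.99)
print(f"IT = {I_T * 1e9:.1f} nW/cm^2, Pd = {p_d:.4f}, satisfactory: {ok}")
```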
Determining the zero-points t1, t2, t3, t4, ..., as in Figure 22, depends on the pulse temporal length variation, τ, as in Figure 25A, defined in the form:

τi = t(i+1) − ti   (67)

where, for i = 2, we have: t3 − t2 = τ2, etc. Therefore, τi defines the ith pulse temporal length, which can vary, or it can be constant for a periodic signal:

τi = constant = τ   (68)

where Eq. (68) is a particular case of Eq. (67).
In the periodic signal case, the precision of the pulse length coding can be very high because it is based on a priori information that is known to the detector circuit, for example, using synchronized detection. However, even in the general case (67), the precision can still be high, since a priori information about the variable pulse length can also be known to the detector circuit.
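As a rough illustration of how a detector circuit might exploit this a priori pulse-length information, the following sketch compares measured interval lengths against a known code; the specific times, code values and tolerance are made-up numbers, not parameters of the disclosure.

```python
import numpy as np

# Zero-points t1, t2, t3, ... (seconds) recovered from threshold crossings
t = np.array([0.0, 1.0e-7, 2.5e-7, 3.5e-7, 5.5e-7])

tau = np.diff(t)                         # Eq. (67): tau_i = t_(i+1) - t_i
expected = np.array([1.0e-7, 1.5e-7, 1.0e-7, 2.0e-7])   # a priori pulse-length code

# Accept the detection only if every measured length matches the known code
tolerance = 5e-9                         # margin around the ~1 ns phase accuracy
match = np.all(np.abs(tau - expected) < tolerance)
print("pulse-length code matched:", match)
```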
In further embodiments, multi-wavelength variable pulse coding may be implemented. Figure 27 illustrates such an embodiment. In a first embodiment 2700, light sources of a plurality of light sources are configured to emit a first wavelength of light 2701 or a second wavelength of light 2702. The light sources operate in a complementary, or non-overlapping, manner, such that the different wavelengths 2704 and 2705 are always transmitted at different times. The particular wavelengths and the pulse lengths allow for temporal and wavelength signatures that may be used for false alarm mitigation. In a second embodiment 2710, the light sources operate in an overlapping manner, resulting in times 2706 when both wavelengths are transmitted. As described above, the use of different filters allows both wavelengths to be detected, and the overlapping times provide another signature for false alarm mitigation.
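A complementary two-wavelength pulse pattern such as the one in the first embodiment can be generated and checked as follows. The code length, slot assignment and the simple validity test are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)
slots = rng.integers(0, 2, size=16)        # pseudo-random slot assignment

lambda1 = slots                             # wavelength 1 transmits in these slots
lambda2 = 1 - slots                         # wavelength 2 fills the remaining slots

# Receiver-side signature test: the two channels must never overlap and must
# jointly cover every slot; otherwise flag a possible false alarm.
overlap = np.any(lambda1 & lambda2)
coverage = np.all(lambda1 | lambda2)
print("valid complementary code:", (not overlap) and coverage)
```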
Increasing the signal level, Is, is a direct way to improve system performance by increasing the (SNR)-value and thus automatically improving the trade-off between the two threshold probabilities discussed above. In some embodiments, an energy harvesting subsystem 2800 may be utilized to increase the energy available for the optical proximity detection system. Current drawn from the projectile engine 2803 during the flight time (Δt0) is stored in the subsystem 2800 and used during detection. An altitude sensor may be used for determining when the optical proximity fuze should begin transmitting light. Assuming a flight length of 2 km and a projectile speed of 400 m/sec, we obtain Δt0 = 5 sec, which is G times more than the fuze's necessary time window, W, which is predetermined using a standard altitude sensor (working with an accuracy of 100 m, for example). For example, if W = 250 msec, then G = (Δt0)/W ≈ 20. Since the power is drawn from the engine during all of the time Δt0, we can accumulate this power during the much shorter W-time, thus increasing the Is-signal by the G-factor. Therefore, the G-factor, defined as

G = (Δt0)/W   (69)

is called the Gain Factor. For the above specific example, G = 20, but this value can be increased by reducing the W-value, which can be done by increasing the altitude sensor accuracy. For example, for a window corresponding to 50 m (W = 125 msec) and the same remaining parameters, we obtain G = 40. Consider, for example, that the DC current drawn is 1 A and the nominal voltage is 12 V; then the DC power is 12 W. However, by applying the Gain Factor, G, with G = 20, for example, we obtain a new power of 20 × 12 W = 240 W, which is a very large value. Then, the signal level, Is, will increase proportionally, and thus also the (SNR)-value; we obtain:
(SNR)' = (SNR)(G) (70)
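The gain-factor arithmetic of Eqs. (69) and (70) is easy to parameterize. The numbers below simply re-derive the worked example above (2 km flight, 400 m/sec, 100 m altitude-sensor accuracy); they are not additional design values.

```python
flight_length = 2000.0    # m
speed = 400.0             # m/s
window_accuracy = 100.0   # altitude-sensor accuracy, m

dt0 = flight_length / speed            # total flight time, 5 s
W = window_accuracy / speed            # detection window, 0.25 s
G = dt0 / W                            # Eq. (69): gain factor, ~20

P_dc = 12.0 * 1.0                      # 12 V x 1 A = 12 W drawn from the engine
print(f"G = {G:.0f}, harvested peak power = {G * P_dc:.0f} W")   # ~240 W
```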
Figure 28 illustrates an energy harvesting subsystem 2800 implemented in accordance with this embodiment. A rechargeable battery 2807 may be combined with a supercapacitor 2805, or either component may be used alone, for temporary electrical energy storage. In a particular embodiment, for example, where electrical charge and space for the system are both at a premium, the supercapacitor 2805 is used in combination with the battery 2807. This allows the relative strengths of each component to be utilized.
A harvesting energy management module (HEMM) 2806 controls the distribution of the electrical power, Pel, from an engine 2803. The power is stored in the battery 2807 or supercapacitor 2805 and then transmitted into the sensor. The electrical energy is stored and accumulated during the flight time Δt0 (or during part of this time), and transmitted into the sensor during the window time, W. For example, the HEMM 2806 may draw power from an Engine Electrical Energy (E3) module installed to serve additional sub-systems with power. In a particular embodiment, the battery's 2807 form factor is configured such that its power density is maximized; i.e., the charge electrode proximity (CEP) region should be enlarged as much as possible. This is because the energy can be quickly stored and retrieved from the CEP region only. As discussed above, the geometry of the optical proximity detection fuze results in a detection signal that first rises in intensity to a maximum value and then begins to decline. Figure 29 illustrates this in terms of an optical impact effect (OIE), which is defined, using mean signal intensity (<I>) maximization, as the time t = tM at which

<I> = <I>M, for t = tM   (71)

where I = Is + In′, after signal smoothing due to low-pass filtering (LPF). The OIE measurement is based on time budget analysis.
In Figure 29, the upper graph 2901 illustrates the trajectory of a projectile. The lower graph 2902 illustrates the mean signal intensity received at a photodetector within the optical proximity fuze. The time axes of both graphs are aligned for illustrative purposes. In the illustrated embodiment, the fuze is configured to activate the projectile at a predetermined distance y0 2907. In this embodiment, the activation distance 2907 is aligned with the end of the time window 2906 in which the target can be detected. However, in other embodiments, the predetermined activation distance can be situated at other points within the detection range. The range in which the target can be detected 2909 is determined according to the position of the photodetectors relative to the receiving aperture of the optical proximity fuze. At the start of a detection operation, the optical proximity fuze begins transmitting light towards the target. Light begins being detected by the photodetector at the start of window 2906. As the light spot reflected off the target traverses the photodetector, the mean intensity 2910 increases to a maximum value 2903 and then declines 2904 to a minimum value.
For example, consider Δy = 10 m; then, for v = 400 m/sec, Δt = 25 msec. The y0-value can also be 10 m (a distance from the ground when the optical impact occurs), or some other value of the same order of magnitude. In order to define the OIE, we divide this Δt-time into time decrements, δt, such that δy = 4 cm, for example. Then, for the same speed, δt = 0.1 msec = 100 μsec. Therefore, in this example, the number of decrements during the optical impact phase, Δt, is 25 msec/0.1 msec = 250, which is a sufficient number to provide an effective statistical average (or mean value) operation, defined as

<I> = [∫ from t to t+δt of I(t) dt] / δt   (73)

which can be done either in the digital or in the analog domain. The I(t)-function can have various profiles, including pulse length modulation, as discussed above. Then, assuming a time-average pulse length of τ = 100 nsec = 0.1 μsec, the total number of pulses per decrement, δt, is: 0.1 msec/0.1 μsec = 1000.
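The decrement averaging of Eq. (73) and the search for the maximum mean intensity <I>M of Eq. (71) can be sketched as below; the synthetic rise-and-decline intensity profile is an assumption standing in for real detector data.

```python
import numpy as np

dt_decrement = 100e-6                      # decrement length, 100 us
pulse_length = 0.1e-6                      # 0.1 us pulses -> 1000 pulses per decrement
samples_per_decrement = round(dt_decrement / pulse_length)
n_decrements = 250

# Synthetic optical-impact profile over 250 decrements: rise, peak, decline
t = np.linspace(0.0, 1.0, n_decrements * samples_per_decrement)
noise = 0.05 * np.random.default_rng(2).standard_normal(t.size)
intensity = np.exp(-((t - 0.5) ** 2) / 0.02) + noise

# Eq. (73): mean intensity per decrement, then Eq. (71): locate the maximum
means = intensity.reshape(n_decrements, samples_per_decrement).mean(axis=1)
t_M = np.argmax(means)
print(f"maximum mean intensity <I>M = {means[t_M]:.3f} at decrement {t_M}")
```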
While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not of limitation. Likewise, the various diagrams may depict an example architectural or other configuration for the invention, which is done to aid in understanding the features and functionality that can be included in the invention. The invention is not restricted to the illustrated example architectures or configurations, but the desired features can be implemented using a variety of alternative architectures and configurations. Indeed, it will be apparent to one of skill in the art how alternative functional, logical or physical partitioning and configurations can be implemented to implement the desired features of the present invention. Also, a multitude of different constituent module names other than those depicted herein can be applied to the various partitions. Additionally, with regard to flow diagrams, operational descriptions and method claims, the order in which the steps are presented herein shall not mandate that various embodiments be implemented to perform the recited functionality in the same order unless the context dictates otherwise.
Although the invention is described above in terms of various exemplary embodiments and implementations, it should be understood that the various features, aspects and functionality described in one or more of the individual embodiments are not limited in their applicability to the particular embodiment with which they are described, but instead can be applied, alone or in various combinations, to one or more of the other embodiments of the invention, whether or not such embodiments are described and whether or not such features are presented as being a part of a described embodiment. Thus, the breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments.
Terms and phrases used in this document, and variations thereof, unless otherwise expressly stated, should be construed as open ended as opposed to limiting. As examples of the foregoing: the term "including" should be read as meaning "including, without limitation" or the like; the term "example" is used to provide exemplary instances of the item in discussion, not an exhaustive or limiting list thereof; the terms "a" or "an" should be read as meaning "at least one," "one or more" or the like; and adjectives such as "conventional," "traditional," "normal," "standard," "known" and terms of similar meaning should not be construed as limiting the item described to a given time period or to an item available as of a given time, but instead should be read to encompass conventional, traditional, normal, or standard technologies that may be available or known now or at any time in the future. Likewise, where this document refers to technologies that would be apparent or known to one of ordinary skill in the art, such
technologies encompass those apparent or known to the skilled artisan now or at any time in the future.
The presence of broadening words and phrases such as "one or more," "at least," "but not limited to" or other like phrases in some instances shall not be read to mean that the narrower case is intended or required in instances where such broadening phrases may be absent. The use of the term "module" does not imply that the components or functionality described or claimed as part of the module are all configured in a common package. Indeed, any or all of the various components of a module, whether control logic or other components, can be combined in a single package or separately maintained and can further be distributed in multiple groupings or packages or across multiple locations. Additionally, the various embodiments set forth herein are described in terms of exemplary block diagrams, flow charts and other illustrations. As will become apparent to one of ordinary skill in the art after reading this document, the illustrated embodiments and their various alternatives can be implemented without confinement to the illustrated examples. For example, block diagrams and their accompanying description should not be construed as mandating a particular architecture or configuration.

Claims
1. An optical impact control system, comprising: a laser light source configured to emit laser light comprising a plurality of orthogonal wavelengths; a first aperture configured to pass the light from the plurality of laser light sources and to direct the light to a target; a second aperture configured to pass the light reflected off of the target; a photodetector configured to detect the laser light having the plurality of orthogonal wavelengths after the light is passed through the second aperture only if the target is within a predetermined distance range from the optical impact control system.
2. The apparatus of claim 1 , wherein the light from the plurality of laser light sources is temporally multiplexed and wherein the wavelengths of the light are temporally modulated.
3. The apparatus of claim 1 , wherein the light from the plurality of laser light sources is spatially multiplexed.
4. The apparatus of claim 1 , wherein the first aperture is an element of an optical projection system, the optical projection system configured to project the light such that the light is substantially in focus within the predetermined distance range.
5. The apparatus of claim 4, wherein the optical projection system further comprises a cylindrical lens.
6. The apparatus of claim 4, wherein the optical projection system further comprises a collimating lens.
7. The apparatus of claim 1 , wherein the second aperture is an element of an optical imaging system, the optical imaging system configured to image the light such that the light is substantially in focus when reflected from the target when the target is within the predetermined distance range.
8. The apparatus of claim 7, wherein the optical imaging system further comprises a cylindrical lens.
9. The apparatus of claim 1 , wherein the photodetector comprises a non-position sensitive photodiode coupled to a detection circuit.
10. The apparatus of claim 1, wherein the photodetector comprises a position sensitive photodiode coupled to a detection circuit, wherein the photodetector is configured to detect position by measuring an area of an active region of the photodiode that is illuminated by the reflected light compared to the total area of the active region.
1 1. The apparatus of claim 1 , wherein the photodetector comprises an array of photodiodes coupled to a detection circuit.
12. The apparatus of claim 1, further comprising an ogive housing the laser light source, the first aperture, the second aperture, and the photodetector; and wherein the photodetector is an element of an array of photodetectors positioned in an axially symmetric manner on the ogive.
13. The apparatus of claim 1, further comprising: an ogive comprising a first ogive portion and a second ogive portion; a first separating means for separating the ogive from a projectile; and a second separating means for separating the first ogive portion from the second ogive portion; and wherein the first ogive portion houses the laser light source and the first aperture, and the second ogive portion houses the photodetector and the second aperture.
14. A munition system, comprising: a projectile; and an optical impact control system coupled to the projectile and configured to transmit a target detection signal to the projectile; wherein the optical impact control system comprises: a laser light source configured to emit laser light comprising a plurality of orthogonal wavelengths; a first aperture configured to pass the light from the plurality of laser light sources and to direct the light to a target; a second aperture configured to pass the light reflected off of the target; a photodetector configured to detect the laser light having the plurality of orthogonal wavelengths after the light is passed through the second aperture only if the target is within a predetermined distance range from the optical impact control system.
15. The system of claim 14, wherein the light from the plurality of laser light sources is temporally multiplexed and wherein the wavelengths of the light are temporally modulated.
16. The system of claim 14, wherein the light from the plurality of laser light sources is spatially multiplexed.
17. The system of claim 14, wherein the first aperture is an element of an optical projection system, the optical projection system configured to project the light such that the light is substantially in focus within the predetermined distance range.
18. The system of claim 17, wherein the optical projection system further comprises a cylindrical lens.
19. The system of claim 17, wherein the optical projection system further comprises a collimating lens.
20. The system of claim 14, wherein the second aperture is an element of an optical imaging system, the optical imaging system configured to image the light such that the light is substantially in focus when reflected from the target when the target is within the predetermined distance range.
21. The system of claim 20, wherein the optical imaging system further comprises a cylindrical lens.
22. The system of claim 14, wherein the photodetector comprises a non-position sensitive photodiode coupled to a detection circuit.
23. The system of claim 14, wherein the photodetector comprises a position sensitive photodiode coupled to a detection circuit, wherein the photodetector is configured to detect position by measuring an area of an active region of the photodiode that is illuminated by the reflected light compared to the total area of the active region.
24. The system of claim 14, wherein the photodetector comprises an array of photodiodes coupled to a detection circuit.
25. The system of claim 14, further comprising an ogive housing the laser light source, the first aperture, the second aperture, and the photodetector; and wherein the photodetector is an element of an array of photodetectors positioned in an axially symmetric manner on the ogive.
26. The system of claim 14, further comprising: an ogive comprising a first ogive portion and a second ogive portion; a first separating means for separating the ogive from a projectile; and a second separating means for separating the first ogive portion from the second ogive portion; and wherein the first ogive portion houses the laser light source and the first aperture, and the second ogive portion houses the photodetector and the second aperture.
PCT/US2010/057167 2009-11-30 2010-11-18 Optical impact control system WO2011066164A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US26527009P 2009-11-30 2009-11-30
US61/265,270 2009-11-30
US12/916,147 2010-10-29
US12/916,147 US8378277B2 (en) 2009-11-30 2010-10-29 Optical impact control system

Publications (1)

Publication Number Publication Date
WO2011066164A1 true WO2011066164A1 (en) 2011-06-03

Family

ID=43500071

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2010/057167 WO2011066164A1 (en) 2009-11-30 2010-11-18 Optical impact control system

Country Status (3)

Country Link
US (1) US8378277B2 (en)
TW (1) TW201207354A (en)
WO (1) WO2011066164A1 (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8886038B1 (en) * 2011-04-29 2014-11-11 Bae Systems Information And Electronic Systems Integration Inc. Weighted waveforms for improved jam code effectiveness
US9366752B2 (en) * 2011-09-23 2016-06-14 Apple Inc. Proximity sensor with asymmetric optical element
US20150097951A1 (en) * 2013-07-17 2015-04-09 Geoffrey Louis Barrows Apparatus for Vision in Low Light Environments
RU2538645C1 (en) * 2013-10-15 2015-01-10 Открытое акционерное общество "Конструкторское бюро приборостроения им. академика А.Г. Шипунова" Method of extending area of applicability of coned-bore rocket and coned-bore rocket implementing method
US10295658B2 (en) 2014-10-02 2019-05-21 The Johns Hopkins University Optical detection system
US9585867B2 (en) 2015-08-06 2017-03-07 Charles Everett Ankner Cannabinod formulation for the sedation of a human or animal
IL240777B (en) * 2015-08-23 2019-10-31 Ispra Ltd Firearm projectile usable as hand grenade
US20170336510A1 (en) * 2016-03-18 2017-11-23 Irvine Sensors Corporation Comprehensive, Wide Area Littoral and Land Surveillance (CWALLS)
TWI646329B (en) * 2016-10-18 2019-01-01 國立高雄科技大學 Impact device and its launching warhead
US10539403B2 (en) 2017-06-09 2020-01-21 Kaman Precision Products, Inc. Laser guided bomb with proximity sensor
US11300383B2 (en) * 2019-08-05 2022-04-12 Bae Systems Information And Electronic Systems Integration Inc. SAL seeker glint management
RU2762176C1 (en) * 2020-07-22 2021-12-16 Самсунг Электроникс Ко., Лтд. Device for expanding an optical radiation beam and method for expanding an optical radiation beam for coherent illumination
US11662511B2 (en) 2020-07-22 2023-05-30 Samsung Electronics Co., Ltd. Beam expander and method of operating the same

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3483821A (en) * 1966-11-04 1969-12-16 Us Army Standoff fire-control system (u)
US3837283A (en) * 1973-08-03 1974-09-24 Us Army Active optical fuze
US4733609A (en) * 1987-04-03 1988-03-29 Digital Signal Corporation Laser proximity sensor
US4996430A (en) * 1989-10-02 1991-02-26 The United States Of America As Represented By The Secretary Of The Army Object detection using two channel active optical sensors
US5142985A (en) * 1990-06-04 1992-09-01 Motorola, Inc. Optical detection device
US5601024A (en) * 1989-11-14 1997-02-11 Daimler-Benz Aerospace Ag Optical proximity fuse

Family Cites Families (50)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3060857A (en) * 1943-04-19 1962-10-30 Bell Telephone Labor Inc Proximity fuze with electro-optical apparatus
US3064578A (en) * 1944-12-13 1962-11-20 Joseph E Henderson Light-sensitive proximity fuze
US3860199A (en) 1972-01-03 1975-01-14 Ship Systems Inc Laser-guided projectile system
US3786757A (en) * 1972-06-22 1974-01-22 Raytheon Co Optical lens arrangement
US3782667A (en) * 1972-07-25 1974-01-01 Us Army Beamrider missile guidance method
US3838645A (en) * 1972-10-31 1974-10-01 Us Army Proximity fuze improvement
US6078606A (en) * 1975-03-17 2000-06-20 Lockheed Martin Corporation Multi-color, multi-pulse laser
US4231533A (en) * 1975-07-09 1980-11-04 The United States Of America As Represented By The Secretary Of The Air Force Static self-contained laser seeker system for active missile guidance
US4153224A (en) * 1976-01-29 1979-05-08 Westinghouse Electric Corp. Laser command guidance system
US4098191A (en) * 1976-07-09 1978-07-04 Motorola, Inc. Passive optical proximity fuze
US7673565B1 (en) * 1976-10-14 2010-03-09 Bae Systems Plc Infra red proximity fuzes
US4146327A (en) 1976-12-27 1979-03-27 Autech Optical triangulation gauging system
US4245560A (en) * 1979-01-02 1981-01-20 Raytheon Company Antitank weapon system and elements therefor
US4373804A (en) 1979-04-30 1983-02-15 Diffracto Ltd. Method and apparatus for electro-optically determining the dimension, location and attitude of objects
US4259009A (en) * 1979-07-30 1981-03-31 The United States Of America As Represented By The Secretary Of The Navy Far field target designators
DE3047678A1 (en) * 1980-03-14 1981-09-24 Naamloze Vennootschap Philips' Gloeilampenfabrieken, Eindhoven METHOD FOR COMBATING TARGETS BY MEANS OF PASSIVE PROJECTILES AND LAUNCHING SYSTEM FOR CARRYING OUT THE METHOD
US4310760A (en) * 1980-05-27 1982-01-12 The United States Of America As Represented By The Secretary Of The Army Optical fuze with improved range function
DE3215845C1 (en) 1982-04-28 1983-11-17 Eltro GmbH, Gesellschaft für Strahlungstechnik, 6900 Heidelberg Distance sensor for a projectile igniter
DE3634724A1 (en) 1986-10-11 1988-04-28 Wolfgang Brunk METHOD AND DEVICE FOR CONTACTLESS OPTICAL MEASUREMENT OF TRAILS, ESPECIALLY IN THE TRIANGULATION METHOD
SE458480B (en) * 1986-12-11 1989-04-03 Bofors Ab DEVICE IN ZONUS FOR PUSHING UNITS, INCLUDING TRANSMITTERS AND RECEIVERS FOR OPTICAL RADIATION
US4859054A (en) * 1987-07-10 1989-08-22 The United States Of America As Represented By The United States Department Of Energy Proximity fuze
SE466821B (en) * 1987-09-21 1992-04-06 Bofors Ab DEVICE FOR AN ACTIVE OPTICAL ZONRER AASTADKOMMA HIGHLIGHTS OF LIGHTENING AGAINST RETURNS, SMOKE, CLOUDS ETC
CA1307051C (en) 1988-02-26 1992-09-01 Paolo Cielo Method and apparatus for monitoring the surface profile of a moving workpiece
US4770482A (en) * 1988-07-17 1988-09-13 Gte Government Systems Corporation Scanning system for optical transmitter beams
CH681111A5 (en) 1990-07-30 1993-01-15 Eidgenoess Munitionsfab Thun
US5221809A (en) 1992-04-13 1993-06-22 Cuadros Jaime H Non-lethal weapons system
US5613650A (en) * 1995-09-13 1997-03-25 Kabushiki Kaisha Toshiba Guided missile
US5912738A (en) 1996-11-25 1999-06-15 Sandia Corporation Measurement of the curvature of a surface using parallel light beams
US6145784A (en) * 1997-08-27 2000-11-14 Trw Inc. Shared aperture dichroic active tracker with background subtraction
US6343766B1 (en) * 1997-08-27 2002-02-05 Trw Inc. Shared aperture dichroic active tracker with background subtraction
US6279478B1 (en) 1998-03-27 2001-08-28 Hayden N. Ringer Imaging-infrared skewed-cone fuze
US6298787B1 (en) 1999-10-05 2001-10-09 Southwest Research Institute Non-lethal kinetic energy weapon system and method
US6302355B1 (en) * 1999-11-02 2001-10-16 Bae Systems Integrated Defense Solutions Inc. Multi spectral imaging ladar
DE10026534A1 (en) * 2000-05-27 2002-02-28 Diehl Munitionssysteme Gmbh Laser distance measuring device for an igniter
US6624899B1 (en) 2000-06-29 2003-09-23 Schmitt Measurement Systems, Inc. Triangulation displacement sensor
DE10162136B4 (en) 2001-12-18 2004-10-14 Diehl Munitionssysteme Gmbh & Co. Kg Missile to be fired from a tube with an over-caliber tail unit
EP1502224B1 (en) 2002-04-15 2012-11-21 Robert Bosch Company Limited Constructing a waveform from multiple threshold samples
US6762427B1 (en) 2002-12-20 2004-07-13 Delphi Technologies, Inc. Object surface characterization using optical triangulaton and a single camera
US6722283B1 (en) 2003-02-19 2004-04-20 The United States Of America As Represented By The Secretary Of The Army Controlled terminal kinetic energy projectile
US7183966B1 (en) * 2003-04-23 2007-02-27 Lockheed Martin Corporation Dual mode target sensing apparatus
US7002699B2 (en) 2004-02-23 2006-02-21 Delphi Technologies, Inc. Identification and labeling of beam images of a structured beam matrix
DE102004029343B4 (en) * 2004-06-17 2009-04-30 Diehl Bgt Defence Gmbh & Co. Kg Guidance device for an aircraft
FR2873438B1 (en) 2004-07-23 2006-11-17 Tda Armements Sas Soc Par Acti METHOD AND SYSTEM FOR ACTIVATION OF THE LOAD OF AMMUNITION, AMMUNITION COMPRISING A HIGH-PRECISION ACTIVATION DEVICE AND SYSTEM FOR NEUTRALIZATION OF A TARGET
DE102005002189B4 (en) 2005-01-17 2007-02-15 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Device for determining the angular position of a light beam and method for operating a device for determining the angular position of a light beam
FR2885213B1 (en) 2005-05-02 2010-11-05 Giat Ind Sa METHOD FOR CONTROLLING A MUNITION OR SUB-MUNITION, ATTACK SYSTEM, MUNITION AND DESIGNER EMPLOYING SUCH A METHOD
US7773202B2 (en) 2005-06-09 2010-08-10 Analog Modules, Inc. Laser spot tracker and target identifier
CA2536411C (en) 2006-02-14 2014-01-14 Lmi Technologies Inc. Multiple axis multipoint non-contact measurement system
US7554076B2 (en) 2006-06-21 2009-06-30 Northrop Grumman Corporation Sensor system with modular optical transceivers
IL187637A (en) 2007-11-26 2014-11-30 Israel Aerospace Ind Ltd Proximity to target detection system and method
US8757064B2 (en) 2008-08-08 2014-06-24 Mbda Uk Limited Optical proximity fuze

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015162062A1 (en) * 2014-04-25 2015-10-29 Thales Proximity fuze, and projectile provided with such a proximity fuze
FR3020455A1 (en) * 2014-04-25 2015-10-30 Thales Sa PROXIMITY FUSE, AND PROJECTILE EQUIPPED WITH SUCH A PROXIMITY FUSEE
US10234255B2 (en) 2014-04-25 2019-03-19 Thales Proximity fuze, and projectile provided with such a proximity fuze
CN112099226A (en) * 2020-03-06 2020-12-18 中国工程物理研究院激光聚变研究中心 Laser beam guiding method for aiming of silk target
CN112099226B (en) * 2020-03-06 2022-02-08 中国工程物理研究院激光聚变研究中心 Laser beam guiding method for aiming of silk target
RU2783734C1 (en) * 2022-02-15 2022-11-16 Федеральное государственное казённое военное образовательное учреждение высшего образования "Военная академия воздушно-космической обороны имени Маршала Советского Союза Г.К. Жукова" Министерства обороны Российской Федерации Method for generating mismatch parameters in the radio-electronic control system of an air-to-air missile when it is homing to a given type of aircraft with a turbojet engine from their heterogeneous pair under the influence of speed-shifting interference

Also Published As

Publication number Publication date
TW201207354A (en) 2012-02-16
US20120211591A1 (en) 2012-08-23
US8378277B2 (en) 2013-02-19

Similar Documents

Publication Publication Date Title
US8378277B2 (en) Optical impact control system
US7436493B2 (en) Laser designator for sensor-fuzed munition and method of operation thereof
US5277113A (en) Optical detection device
US8208130B2 (en) Laser designator and repeater system for sensor fuzed submunition and method of operation thereof
US5942716A (en) Armored vehicle protection
US6770865B2 (en) Systems, methods, and devices for detecting light and determining its source
WO2003093757A1 (en) Method for protecting an aircraft against a threat that utilizes an infrared sensor
US7417582B2 (en) System and method for triggering an explosive device
US5831724A (en) Imaging lidar-based aim verification method and system
AU2014282795B2 (en) Threat warning system integrating flash event and transmitted laser detection
US4269121A (en) Semi-active optical fuzing
EP2232300B1 (en) Proximity to target detection system and method
US4819561A (en) Sensor for attacking helicopters
GB1605301A (en) Fuzing systems for projectiles
EP2942597B1 (en) An active protection system
RU2373482C2 (en) Method of protecting armored vehicles
RU2121646C1 (en) Ammunition for suppression of opticoelectron facilities
Paleologue Active infrared systems: possible roles in ballistic missile defense?
US7781721B1 (en) Active electro-optic missile warning system
Gogoi et al. Testing and Evaluation of High Energy Portable Laser Source used as a Target Designator along with a Laser Seeker.
RU2503921C2 (en) Rocket missile
RU2634798C1 (en) Method of protecting helicopter from guided munition
RU2500979C2 (en) Jet projectile fuse optical unit
GB1605302A (en) Fire control systems
Leslie et al. Surveillance, detection, and 3D infrared tracking of bullets, rockets, mortars, and artillery

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10782773

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10782773

Country of ref document: EP

Kind code of ref document: A1