WO2006115470A1 - Multiple angle display produced from remote optical sensing devices - Google Patents

Multiple angle display produced from remote optical sensing devices

Info

Publication number
WO2006115470A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
pixel array
ground area
satellite
camera
Prior art date
Application number
PCT/US2003/008951
Other languages
French (fr)
Inventor
Jerry C. Nims
Paul F. Peters
William M. Karszes
Original Assignee
Orasee Corp.
Priority date
Filing date
Publication date
Application filed by Orasee Corp. filed Critical Orasee Corp.
Priority to AU2003224751A priority Critical patent/AU2003224751A1/en
Publication of WO2006115470A1 publication Critical patent/WO2006115470A1/en

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04Interpretation of pictures
    • G01C11/06Interpretation of pictures by comparison of two or more pictures of the same area
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B30/00Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
    • G02B30/20Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes
    • G02B30/26Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type
    • G02B30/27Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type involving lenticular arrays
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/207Image signal generators using stereoscopic image cameras using a single 2D image sensor
    • H04N13/211Image signal generators using stereoscopic image cameras using a single 2D image sensor using temporal multiplexing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/302Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N13/305Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using lenticular lenses, e.g. arrangements of cylindrical lenses

Definitions

  • This invention relates generally to the collection and presentation of optical information and, more particularly, to the acquisition, processing, and hard copy presentation of optical information obtained from a plurality of viewing or sensing angles.
  • optical information is integral to a variety of activities. These activities include, without limitation, airborne and satellite surveillance and monitoring of areas of interest.
  • the collected information is typically digitized on the platform on which the cameras or other optical sensors are mounted, pre-processed and sent by downlink for further processing.
  • the information is often formatted and printed for visual inspection as well. For example, aerial photographs may be studied by skilled persons, both to identify items missed by computer recognition methods and to obtain further information not conveniently or accurately obtained by computer methods.
  • the 2-dimensional picture has shortcomings. One is that it is 2-dimensional, which has aesthetic and related functional drawbacks. More particularly, the viewer does not obtain a sense of depth from a 2-dimensional picture, and this failure may cause a misinterpretation of information that a 3-dimensional view would have provided.
  • Another shortcoming with existing art airborne and satellite surveillance systems, and the hard copy images they produce, is that the images show the photographed ground area only as seen from the one position and viewing angle at which it was originally obtained. For example, a photograph of a ten-foot diameter hole obtained by overflying it with a camera looking down at an angle of 45 degrees with respect to a flight path may fail to present an image of the contents of the hole.
  • One possible solution to the above example problem is to fly over the item of interest, i.e., the hole, twice, with the camera looking straight down on the second flyover.
  • Other possible solutions include mounting a plurality of cameras on the airborne or satellite platform, or mounting a camera on a steerable gimbal, and thereby obtain a plurality of pictures of a particular ground area, each from a different viewing angle.
  • the viewer may have a hard copy of a first picture of a ground area, taken from a first airborne surveillance viewing angle, in which a building of interest is situated in, for example, the upper left corner of the copy.
  • a second picture of the same ground area, taken from a second viewing angle may show the building in its upper right corner.
  • Still another problem, which relates to the previously identified problem is that the viewer must change his or her visual focus continually, namely by looking at the pictures taken from one viewing angle and then looking at the pictures taken from another viewing angle. This can be inconvenient. It also increases the probability of human error, as the user must remember how something looked from one viewing angle as he or she shifts attention to another hard copy showing how the item appeared from another viewing angle.
  • the existing art does provide a type of stereoscopic visual surveillance method, in which two frames of information are captured via satellite and transmitted to, for example, the National Reconnaissance Office (NRO).
  • Printable images of the left and right frames are then generated, one being polarized orthogonal to the other, arranged above one another and printed.
  • the user wears polarizing glasses, whereby his or her left eye sees the left image and his or her right eye sees the right image. The user thus sees an apparent three-dimensional image.
  • each polarized image pair shows, and is limited to, the ground area of interest as seen from an oblique viewing angle.
  • a typical stereoscopic image is formed by mounting two cameras on a satellite. One camera is ahead of the satellite, at a depression angle toward the earth. The other camera looks behind the satellite, at the same depression angle. Therefore, the left image and the right image are each obtained by looking at the ground area of interest at an oblique viewing angle. For this reason the prior art stereoscopic image does not have a direct look-down angle. This can have significant results.
  • FIG. 1 shows a simulated example of an image of a building 2 as if viewed in its original polarized format through polarized glasses.
  • FIG. 4 is a microlens hard copy of the same building, displaying two three-dimensional views generated and fixed in hard copy in accordance with a method of the present invention.
  • FIG. 1 does not reveal any object proximal to the building 2.
  • One of the two viewing angles provided by FIG. 4, though, reveals a missile 4.
  • the missile 4 cannot be seen in FIG. 1 because it is close against a side wall of the building 2, and the FIG. 1 stereoscopic image was obtained from oblique viewing angles.
  • the present invention advances the art and overcomes the problems identified above by placing on a single microlens sheet images of an object or ground area as seen from a remote distance at multiple viewing angles, such that the viewer can move the sheet to see the object from any of the viewing angles.
  • the original images are taken from the remote distance, and then processed, formatted and fixed to the microlens sheet such that the user will see a plurality of three-dimensional images.
  • the microlens sheet may comprise a plurality of semi-cylindrical or similar cross-section transparent lenses, lying in a plane and extending parallel to one another.
  • the original images are obtained by, for example, one or more optical detectors mounted on an aerial or space-borne platform.
  • the optical detectors may detect visible or non-visible frequency bands, or combinations of the same.
  • the optical detectors may be steerable or pointable, either by commands entered local to the platform or by uplink.
  • the optical detectors may include a telephoto lens, with or without a zoom feature.
  • a first embodiment of the invention is a method including flying the platform, or orbiting it, along a platform path over a ground area of interest.
  • a first detection image is detected by the optical detector from a first position on the platform path such that the ground area of interest is in the first image's field of view.
  • a second detection image is detected by the optical detector when the platform is at a second position on the platform path.
  • a third detection image is detected. The second detection image and the third detection image are detected such that the ground area of interest is in the field of view of each.
  • a first digital pixel array representing the first detection image is input to a data processor.
  • a second digital pixel array representing the second detection image, and a third digital pixel array representing the third detected image are input to the data processor.
  • the data processor generates an output interphased or interlaced digital pixel array based on the first digital pixel array, the second digital pixel array, and the third pixel array.
  • a visible interphased image is printed on a printable surface of a hard copy sheet, the printed image being based on the output interphased digital pixel array, and a microlens sheet is overlaid onto the printable surface.
  • the output interphased digital pixel array is generated, and the visible interphased image is printed such that when the microlens sheet is overlaid the user sees, from a first viewing position, a first three-dimensional image based on the first and second detected images and, from a second viewing position, sees a second three-dimensional image based on the second and third detected images.
  • the first three-dimensional image shows a surveillance line of sight extending from the ground area of interest to a point on the platform path halfway between the first viewing position and the second viewing position.
  • the second three-dimensional image shows a surveillance line of sight extending from the ground area of interest to a point on the platform path halfway between the second viewing position and the third viewing position. Since the first three-dimensional image and the second three- dimensional image each include a direct down view, each provides a view into holes, between buildings and the like.
  • the microlens sheet comprises a plurality of semi-cylindrical or similar cross-section transparent lenses, lying in a plane and extending parallel to one another.
  • a rotation axis lies in the plane and extends in a direction parallel to the lenses.
  • the first orientation and position includes a first rotation of the hard copy sheet about the rotation axis and the second orientation and position includes a second rotation of the hard copy sheet about the rotation axis.
  • the microlens sheet comprises a plurality of lenses, each having a circular or elliptical circumference, and each having a hemispherical or aspherical cross-section.
  • a first rotation axis lies in the plane, and a second rotation axis lies in the plane and extends normal to the first rotation axis.
  • the first orientation and position includes a first rotation of the hard copy sheet about the first rotation axis and the second orientation and position includes a second rotation of the hard copy sheet about the first rotation axis.
  • the microlens sheet comprises a plurality of lenses, each having a circular or elliptical circumference, and each having a hemispherical or aspherical cross-section, according to the previously identified aspect.
  • a fourth detection image is detected from a fourth viewing position spaced laterally in a first direction from the first platform path, with the ground area of interest being in the fourth detection image field.
  • a fourth digital pixel array representing the fourth detection image is input to the data processor.
  • the data processor generates an output interphased or interlaced digital pixel array based on the first digital pixel array, the second digital pixel array, the third pixel array, and the fourth pixel array.
  • the output interphased digital pixel array is further generated, the visible interphased image is printed, and the microlens sheet is overlaid such that when the user rotates the microlens sheet between a first rotation position and a second rotation position about the second rotation axis, the user sees a visual three-dimensional image based, at least in part, on the fourth detected image.
  • a further embodiment of the invention is a method including flying the platform along a platform path above a ground area of interest.
  • a first left detection image is detected by the optical detector from a first position on the platform path.
  • a first right detection image is detected by the optical detector from a second position on the platform path.
  • a second left detection image is detected by the optical detector from a third position on the platform path.
  • a second right detection image is detected by the optical detector from a fourth position on the platform path.
  • a first digital pixel array representing the first detection image is input to a data processor.
  • a second digital pixel array representing the second detection image, and a third digital pixel array representing the third detected image are input to the data processor.
  • the data processor generates an output interphased or interlaced digital pixel array based on the first digital pixel array, the second digital pixel array, and the third pixel array.
  • a visible interphased image is printed on a printable surface of a hard copy sheet, the printed image being based on the output interphased digital pixel array, and a microlens sheet is overlaid onto the printable surface.
  • the output interphased digital pixel array is generated, and the visible interphased image is printed such that when the microlens sheet is overlaid the user sees, from a first viewing position, a first three-dimensional image based on the first and second detected images and, from a second viewing position, sees a second three- dimensional image based on the second and third detected images.
  • the platform path is any of a curved path, semicircular or circular path, or combination of such paths, about a surveillance axis extending normal to the ground area of interest.
  • the microlens sheet comprises a plurality of lenses, each having a circular or elliptical circumference, and each having a hemispherical or aspherical cross-section, according to the previously identified aspect.
  • a fourth detection image is detected when the platform is at a fourth position on the platform path. The fourth detection image is such that the ground area of interest is in its field.
  • a fourth digital pixel array representing the fourth detection image is input to the data processor.
  • the data processor generates an output interphased or interlaced digital pixel array based on the first digital pixel array, the second digital pixel array, the third pixel array, and the fourth pixel array.
  • the output interphased digital pixel array is further generated, the visible interphased image is printed, and the microlens sheet is overlaid such that when the user rotates the microlens sheet between a first rotation position and a second rotation position about the second rotation axis, the user sees a visual three-dimensional image based, at least in part, on the fourth detected image.
  • An objective of the present invention is to convert remotely acquired images into one or more hard copy motion, or hard copy multidimensional images and/or display devices, to dramatically improve visual intelligence.
  • the hard copy information can then be used, for example, in briefings and distributed to those involved, e.g., pilots, special ops, before carrying out a mission.
  • the micro lens sheet has been specifically designed to map special algorithms that maintain the geometric and physical properties of light waves. This is accomplished by interphasing the light waves in the precise formulas that are designed to fit the specific requirements of the particular discipline being used.
  • the micro lenses are also designed to transmit back to the human visual system the light waves that closely replicate the original scene or object.
  • the micro lens sheet thereby serves as a method of storage and translation of reconnaissance information.
  • the micro lens sheet permits the viewer to see a multidimensional projection or a "360 degree look-around" view including, for example, height of objects including, for example, buildings, mountains, fires, etc. without the aid of further processing and display equipment.
  • FIG. 1 is an example picture of a building generated for viewing through a polarization-based three-dimensional viewing system
  • FIG. 2 depicts an example surveillance system according to the present invention
  • FIG. 3 shows an example flow chart of a first method according to the present invention.
  • FIG. 4 is a microlens hard copy showing two three-dimensional views of the building shown in FIG. 1 , generated in accordance with the methods of the present invention.
  • FIG. 2 shows an example of a surveillance system for carrying out the method of the present invention.
  • the FIG. 2 system comprises a low earth orbit (LEO) satellite 10, shown at position 12a, position 12b and position 12c along an orbit path 14.
  • Mounted on the satellite 10 is a forward-looking camera 16, a nadir camera 18, and rearward-looking camera 20.
  • for purposes of example it is assumed that the cameras 16, 18 and 20 detect images in the visual near-infrared (VNIR) band; this is not a limitation, as the invention can utilize any optical band.
  • the forward camera 16 has a line of sight 16L pointing down from the orbit tangent line TL at an angle THETA.
  • the line of sight 18L of the nadir camera 18 points directly down, and the rearward camera 20 has a line of sight 20L that points down at an angle of minus THETA.
  • the forward camera 16 is shown at position 12a as covering a field of view FV on the ground.
  • the nadir camera 18 has a field of view covering the same area FV.
  • the field of view of the rearward camera 20 is the area FV.
  • the above-described example platform is known in the art and therefore a further detailed description is not necessary.
  • Other known platforms and arrangements of cameras are known in the art and may be used.
  • the cameras 16, 18 and 20 could be for bandwidths other than VNIR, examples being panchromatic and shortwave infrared (SWIR). Additional cameras may be mounted on the platform as well.
  • Also shown in FIG. 2 are a ground station 30, a processing station 32, a communication link 34 between the ground station 30 and the processing station 32, and an inkjet printer 36.
  • An uplink 38 carries command and control signals from the ground station 30 to the satellite 10
  • a downlink 40 carries camera sensor data, described below, and satellite status information.
  • the ground station 30, the uplink 38 and the downlink 40 are in accordance with the known art of satellite communication and control and, therefore, description is not necessary for understanding or practicing this invention.
  • FIG. 3 shows an example flow chart for a first embodiment of the invention, and an example operation will be described in reference to the system illustrated by FIG. 2.
  • the example describes only generating and displaying, according to this invention, a multi-view image of one ground area, labeled FV.
  • a multi-view image would likely be generated of a plurality of ground areas LV and, as will be understood by one of ordinary skill in the field of satellite and airborne surveillance, such images can be obtained by repeating the FIG. 3 method.
  • UPDATE commands are first sent by uplink at block 100.
  • block 100 is for illustrative purposes only, and need not be repeated each time that image sensing data is to be collected.
  • the timing, specific content, protocols, and signal characteristics of the UPDATE commands are specific to the particular kind and configuration of the satellite 10 and the ground station 30, and such commands are readily implemented by persons skilled in the art of satellite controls.
  • SENSOR LEFT data is collected by the ground station 30 from camera 16 when the satellite is at position 12a.
  • SENSOR CENTER data is collected from camera 18.
  • SENSOR RIGHT data is collected from camera 20. It will be understood that blocks 102, 104 and 106 are not necessarily performed as separate data collection steps.
  • the SENSOR LEFT, SENSOR CENTER and SENSOR RIGHT data may be multiplexed onto a single data stream and continually collected during a time interval that includes the times that the satellite is at positions 12a, 12b and 12c. Further, the collection is not necessarily performed at the ground station 30, because other ground receiving stations (not shown) may receive the data downlink from the satellite 10. Such arrangements of ground stations and data collection stations are known in the art. Still further, the collection steps 102, 104 and 106 may include retransmission through ground repeaters (not shown), as well as encryption and decryption, and land-line transmissions. These data transfer methods and protocols are known in the art.
  • step 108 formats the data, sends it over the link 34 and inputs it to a data processor, shown as item 32 in FIG. 2.
  • the link 34 may be the Internet and, accordingly, the formatting, transfer and input may further include data and data network transmissions such as, for example, a File Transfer Protocol (FTP) transfer. Further, the link 34 is shown for purposes of example only.
  • the data processor 32 may be local to the ground station 30, or to any other ground receiving station.
  • the data processor 32 can be any of a large variety of standard commercially available general purpose programmable digital computers (not shown) having, for example, a standard protocol digital input port, a microprocessor, operating system storage, operating system software stored in same, application program storage, data storage, a standard protocol digital output port and, preferably, a user interface, and a video screen.
  • An example computer is a Dell® model Optiplex® GX 150 having a 1 GHz Intel® Pentium® III or Celeron® microprocessor, 528 MByte RAM, a 60 GByte hard drive, a 19 inch conventional cathode ray tube (CRT) video display, and a standard keyboard and mouse for user entry of data and commands, running under Microsoft Windows 2000® or Windows XP® operating system.
  • step 110 reformats the SENSOR LEFT, SENSOR CENTER and SENSOR RIGHT data into three N x M pixel arrays, which are labeled for reference purposes as LeftPixelArray, CenterPixelArray and RightPixelArray.
  • the step 110 reformatting is based on a predetermined, user-input MicroLensData which characterizes physical parameters of the microlens sheet on which the final image set will be printed.
  • Step 110 may also be based on a PrinterResData characterizing performance parameters of the printer 36, particularly the printer's resolution in, for example, dots per inch (DPI).
  • Step 110 uses this LPI data, and the PrinterResData, to convert the SENSOR LEFT, SENSOR RIGHT and SENSOR CENTER data into N x M pixel arrays LeftPixelArray, CenterPixelArray and RightPixelArray, with N and M selected to place an optimal number of printed pixels under each lenticule.
  • the pixel resolution of the nadir camera 18 may differ from the pixel resolution of the forward and rearward looking cameras 16 and 20.
  • the pixel resolution may differ in terms of the number of pixels generated by the camera, and by the ground area represented by each pixel.
  • One reason for the latter is that the image field of the forward and rearward cameras 16 and 20, in terms of the total ground area, is typically larger than that covered by the nadir camera 18.
  • each pixel generated by the nadir camera 18 may represent 5 meters by 5 meters of ground area, while each pixel generated by the cameras 16 and 20 may represent, for example, 8 meters by 8 meters.
  • image mapping algorithms may be used. Such algorithms are well-known in the art; a minimal resampling sketch is given after this list.
  • the images represented by LeftPixelArray, CenterPixelArray and RightPixelArray are of the same ground area LV, viewed from the three positions shown in FIG. 2 as 12a, 12b, and 12c.
  • the difference between the LeftPixelArray and CenterPixelArray is a parallax equivalent to a viewer having his or her left eye at position 12a and his or her right eye at position 12b and looking at the LV area.
  • step 112 generates a 3dView1 and a 3dView2 image, the first being a 2N x 2M pixel array representing a rasterization and interlacing of the LeftPixelArray and CenterPixelArray, and the second being a pixel array of the same format representing a rasterization and interlacing of the CenterPixelArray and RightPixelArray.
  • the 3dView1 pixels are spaced such that when overlaid by the microlens sheet the light from pixels corresponding to the LeftPixelArray is diffracted in one direction and the light from pixels corresponding to the CenterPixelArray is diffracted in another direction.
  • the 3dView2 pixels are spaced such that when overlaid by the microlens sheet the light from pixels corresponding to the CenterPixelArray is diffracted in one direction and the light from pixels corresponding to the RightPixelArray is diffracted in another direction.
  • the optical physics of the diffraction are known in the lenticular arts. An example description is found in U.S. Patent No. 6,091,482.
  • Utilizing mathematical and ray-trace models of the microlens sheet, step 112 generates 3dView1 to have a pixel spacing, relative to the lenses of the microlens sheet, such that when a user views the microlens sheet from a first viewing direction light from the LeftPixelArray pixels impinges on the viewer's left eye, and light from the CenterPixelArray pixels impinges on the viewer's right eye. Therefore, when the viewer looks at the microlens sheet from the first viewing direction the viewer sees a three-dimensional image of the LV area, as if seen from the VP1 position.
  • step 112 generates 3dView2 such that the pixels are spaced relative to the lenses of the microlens sheet so that when a user views the microlens sheet from a second viewing direction light from the CenterPixelArray pixels impinges on the viewer's left eye, and light from the RightPixelArray pixels impinges on the viewer's right eye. Therefore, when the viewer looks at the microlens sheet from the second viewing direction the viewer sees a three-dimensional image of the LV area, as if seen from the VP2 position.
  • the pixels for 3dView1, 3dView2, 3dView3 and 3dView4 are printed and overlaid with a microlens sheet comprising a plurality of circular footprint lenses, each having a hemispherical or aspherical cross-section.
  • the microlens sheet has a first rotation axis extending in a direction in the plane of the microlenses and a second rotation axis extending in the same plane but perpendicular to the first rotation axis.
  • the spacing of the 3dView1, 3dView2, 3dView3 and 3dView4 pixels, with respect to the microlens sheet lenses, is such that when the viewer's line of sight is at a first rotation about the first axis the viewer sees a three-dimensional image corresponding to 3dView1.
  • when the viewer's line of sight is at a second rotational position about the first axis, he or she sees an image corresponding to the 3dView2 image.
  • when the user rotates the microlens sheet to a particular position about the second axis, the viewer sees a three-dimensional image corresponding to the 3dView3 image.
  • at another rotational position about the second axis, the viewer sees a three-dimensional image corresponding to the 3dView4 image.
  • the pixels can be printed directly on a microlens sheet having a printable back surface.
  • the viewer is provided with a single hard copy which, with respect to a building, shows the building's left face, right face, front face and back face, each view being three-dimensional. Further, referring to FIG. 2, provided that images on which the three-dimensional images are based are obtained from a viewing position such as 12b, the viewer will have the option of rotating the hard copy so that he or she looks directly down at the ground area of interest.
  • the present invention thereby presents the user, by way of a single hard copy, with the two three-dimensional views, one as if seeing the LV area from VP1 and the other as if seeing the LV area from VP2.
  • the user does not have to wear glasses to see any of the 3-dimensional pictures. Instead, with the unaided eye the user can see multiple views of an area or object using only a single hard copy.
  • the prior art provides only a single three-dimensional view which the user must wear special glasses to see. Therefore, with the present invention the user does not have to wear special glasses and does not have to keep track of, and look back and forth between a plurality of pictures when studying an area or item of interest.
  • the hard copies can be any viewable size such as, for example, 8½" by 11", paper size "A4", large poster-size sheets, or 3" by 5" cards.
  • FIG. 4 is a microlens hard copy of a simulated ground area imaged in accordance with the above-described invention.
  • the simulated ground area includes a building 2.
  • the FIG. 4 microlens hard copy provides two three dimensional views of the ground area. In one of the two three-dimensional views a missile, labeled as item 4, is seen located against a side wall of the building.
  • the FIG. 1 simulation of an existing art stereoscopic image of the building 2 does not show the missile 4. It does not because the viewing angle would have to be a nadir angle, such as that obtained from the nadir camera 18.
  • Such a nadir image of the building 2 is included in one of the two three-dimensional views of the building 2 provided by FIG. 4.
  • the user would have to be given a hard copy separate from FIG. 1. Therefore, as can be seen, the user would have to wear polarizing glasses to see what is shown by FIG. 1, and then look at a separate image, if one was provided, to see the missile 4.
  • the present invention solves these problems by providing the user with a hard copy showing multiple three-dimensional views of the area of interest, and the user can inspect each of these, in three-dimensional viewing, by simply rotating the hard copy.
  • the example above was described using three image-taking positions, namely 12a, 12b and 12c, and generating two three-dimensional images as a result.
  • a larger number of image-taking positions may be used.
  • the above-described example used images taken from positions 12a and 12b to generate the three-dimensional image along the line of sight VP1, and the image taken from position 12b again, paired with the image taken from position 12c, to generate the three-dimensional image along view line VP2.
  • the second three dimensional image could have used additional viewing positions, each spaced in the orbit direction beyond points 12b and 12c.
  • the above-described example obtained images by orbiting a single satellite in a planar orbit forming an arc over the imaged area.
  • a 360 degree viewing angle hard copy may be generated by using two satellites, with their respective orbits crossing over one another above the area of interest, at an angle preferably close to ninety degrees.
  • the first satellite would obtain three images representing, respectively, the area of interest as seen from a first, second and third position along that satellite's orbit. From these three images two left-right images would be generated, such as the 3dView1 and 3dView2 images described above.
  • the second satellite obtains three images representing, respectively, the area of interest as seen from a first, second and third position along that satellite's orbit. From these three images two additional left-right images are generated, and these may be labeled as 3dView3 and 3dView4.
  • FIG. 2 shows the satellite 10 as an example platform.
  • the present invention further contemplates use of an airborne platform.
  • the airborne platform can be manned or unmanned.
  • the cameras can be gimbal mounted, with automatic tracking and stabilization, as known in the art of airborne surveillance.
  • Such a system may fly the platform in a circular path around a ground area of interest and obtain a plurality of detection images from various points along the path.
  • the image data may be downloaded during flight or stored on board the platform for later retrieval.
  • Two or more pairs of the detection images would be used to generate left-eye and right-eye images, each pair having the parallax information for a three-dimensional view from a viewing angle halfway between the position from which the left eye image was detected and the position from which the right eye image was detected.
  • Registration and alignment of the left-eye and right-eye images may be performed, using known image processing techniques, before interphasing their respective pixel arrays for printing and viewing through a microlens sheet; a simple translational registration sketch is given after this list.
  • the present invention further contemplates use of multiple platforms for obtaining the plurality of detection images of a particular ground area of interest.
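The bullets above note that the nadir and oblique cameras may deliver different pixel counts and different ground areas per pixel, and that known image mapping algorithms bring them onto a common grid. Below is a minimal, hedged sketch of one such mapping, nearest-neighbor resampling of an 8 m-per-pixel image onto a 5 m-per-pixel grid; the function name, array shapes and ground sample distances are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def resample_to_common_grid(src: np.ndarray, src_gsd_m: float, dst_gsd_m: float) -> np.ndarray:
    """Nearest-neighbor resample of an image whose pixels each cover src_gsd_m metres
    of ground onto a grid whose pixels each cover dst_gsd_m metres, so that all three
    camera images describe the same ground area with the same pixel footprint."""
    scale = src_gsd_m / dst_gsd_m                      # e.g. 8 m -> 5 m gives 1.6
    out_rows = int(round(src.shape[0] * scale))
    out_cols = int(round(src.shape[1] * scale))
    row_idx = np.minimum((np.arange(out_rows) / scale).astype(int), src.shape[0] - 1)
    col_idx = np.minimum((np.arange(out_cols) / scale).astype(int), src.shape[1] - 1)
    return src[np.ix_(row_idx, col_idx)]

# Illustrative use: bring a forward camera's 8 m pixels onto the nadir camera's 5 m grid.
forward_8m = np.zeros((1000, 1000), dtype=np.uint8)    # placeholder SENSOR LEFT data
forward_on_5m_grid = resample_to_common_grid(forward_8m, src_gsd_m=8.0, dst_gsd_m=5.0)
```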
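The registration and alignment step noted in the bullets above can be illustrated with a phase-correlation estimate of the translation between a left-eye and a right-eye image. This is only a sketch under the assumption that a pure shift is enough; real airborne imagery may also need rotation, scale and perspective correction, and none of these function names come from the patent.

```python
import numpy as np

def estimate_shift(reference: np.ndarray, moving: np.ndarray) -> tuple[int, int]:
    """Estimate the integer (row, col) translation that best aligns `moving` with
    `reference`, using phase correlation of the two images."""
    cross_power = np.fft.fft2(reference) * np.conj(np.fft.fft2(moving))
    cross_power /= np.abs(cross_power) + 1e-12          # keep only phase information
    correlation = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Wrap peaks in the upper half of each axis around to negative shifts.
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, correlation.shape)]
    return int(shifts[0]), int(shifts[1])

def register(reference: np.ndarray, moving: np.ndarray) -> np.ndarray:
    """Shift `moving` so it lines up with `reference` before the pixel arrays are
    interphased for printing under the microlens sheet."""
    dr, dc = estimate_shift(reference, moving)
    return np.roll(np.roll(moving, dr, axis=0), dc, axis=1)
```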

Abstract

Multiple left-eye, right-eye images of a ground area are obtained from an aerial or satellite platform. The left-eye, right-eye images are transferred to a digital processor (108) which rasterizes (110) and interleaves (112) the images to an output file for printing. The output file is printed on a hard copy medium and overlaid with a microlens sheet having a plurality of lenses (114). The lenses refract the printed image such that a viewer sees from a first position relative to the hard copy a three-dimensional image of the ground area as seen from a first aerial or satellite viewing position and, from a second position relative to the hard copy, sees a three-dimensional image of the ground area as seen from a second aerial or satellite viewing position.

Description

MULTIPLE ANGLE DISPLAY PRODUCED FROM REMOTE OPTICAL SENSING DEVICES
BACKGROUND OF THE INVENTION
Priority of this application is based on U.S. Provisional Application No. 60/361,099 filed March 1, 2002.
Field of the Invention
[0001] This invention relates generally to the collection and presentation of optical information and, more particularly, to the acquisition, processing, and hard copy presentation of optical information obtained from a plurality of viewing or sensing angles.
Statement of the Problem
[0002] The collection and study of optical information is integral to a variety of activities. These activities include, without limitation, airborne and satellite surveillance and monitoring of areas of interest. The collected information is typically digitized on the platform on which the cameras or other optical sensors are mounted, pre-processed and sent by downlink for further processing. The information is often formatted and printed for visual inspection as well. For example, aerial photographs may be studied by skilled persons, both to identify items missed by computer recognition methods and to obtain further information not conveniently or accurately obtained by computer methods.
[0003] The 2-dimensional picture has shortcomings. One is that it is 2-dimensional, which has aesthetic and related functional drawbacks. More particularly, the viewer does not obtain a sense of depth from a 2-dimensional picture, and this failure may cause a misinterpretation of information that a 3-dimensional view would have provided. [0004] Another shortcoming with existing art airborne and satellite surveillance systems, and the hard copy images they produce, is that the images show the photographed ground area only as seen from the one position and viewing angle at which it was originally obtained. For example, a photograph of a ten-foot diameter hole obtained by overflying it with a camera looking down at an angle of 45 degrees with respect to a flight path may fail to present an image of the contents of the hole.
[0005] One possible solution to the above example problem is to fly over the item of interest, i.e., the hole, twice, with the camera looking straight down on the second flyover. Other possible solutions include mounting a plurality of cameras on the airborne or satellite platform, or mounting a camera on a steerable gimbal, and thereby obtain a plurality of pictures of a particular ground area, each from a different viewing angle.
[0006] There are problems with the above-identified potential solutions. One is that even if a plurality of pictures is obtained, each of the pictures is two- dimensional. The previously identified problems with two-dimensional images remain. Another problem is that assigning a plurality of pictures to cover the various viewing angles of each ground area of interest requires the user to keep track of, and have the burden of viewing, a plurality of hard copy pictures. This creates further problems. One is the overhead caused by the user having to keep track of multiple pictures. Another is that the pictures may not be aligned or registered with respect to one another. For example, the viewer may have a hard copy of a first picture of a ground area, taken from a first airborne surveillance viewing angle, in which a building of interest is situated in, for example, the upper left corner of the copy. A second picture of the same ground area, taken from a second viewing angle, may show the building in its upper right corner. Still another problem, which relates to the previously identified problem, is that the viewer must change his or her visual focus continually, namely by looking at the pictures taken from one viewing angle and then looking at the pictures taken from another viewing angle. This can be inconvenient. It also increases the probability of human error, as the user must remember how something looked from one viewing angle as he or she shifts attention to another hard copy showing how the item appeared from another viewing angle.
[0007] The existing art does provide a type of stereoscopic visual surveillance method, in which two frames of information are captured via satellite and transmitted to, for example, the National Reconnaissance Office (NRO). Printable images of the left and right frames are then generated, one being polarized orthogonal to the other, arranged above one another and printed. The user wears polarizing glasses, whereby his or her left eye sees the left image and his or her right eye sees the right image. The user thus sees an apparent three-dimensional image.
[0008] However, there are numerous problems with this method. One is that each polarized image pair shows, and is limited to, the ground area of interest as seen from an oblique viewing angle. More particularly, a typical stereoscopic image is formed by mounting two cameras on a satellite. One camera is ahead of the satellite, at a depression angle toward the earth. The other camera looks behind the satellite, at the same depression angle. Therefore, the left image and the right image are each obtained by looking at the ground area of interest at an oblique viewing angle. For this reason the prior art stereoscopic image does not have a direct look-down angle. This can have significant results. Prior art FIG. 1 shows a simulated example of an image of a building 2 as if viewed in its original polarized format through polarized glasses. For purposes of comparison, FIG. 4 is a microlens hard copy of the same building, displaying two three-dimensional views generated and fixed in hard copy in accordance with a method of the present invention. FIG. 1 does not reveal any object proximal to the building 2. One of the two viewing angles provided by FIG. 4, though, reveals a missile 4. The missile 4 cannot be seen in FIG. 1 because it is close against a side wall of the building 2, and the FIG. 1 stereoscopic image was obtained from oblique viewing angles. [0009] Another shortcoming with the prior art stereoscopic views is that, even with the polarizing glasses, only one three-dimensional view of the ground area of interest can be seen from a single hard copy. Therefore, the previously discussed problems of a single viewing angle image are increased. They are increased because not only is the user required to look at multiple hard copies to see what an area or building looks like from different viewing angles, but the pictures are difficult to identify without wearing the glasses. The glasses cause further problems, namely eye fatigue and equipment overhead.
THE SOLUTION
[0010] The present invention advances the art and overcomes the problems identified above by placing on a single microlens sheet images of an object or ground area as seen from a remote distance at multiple viewing angles, such that the viewer can move the sheet to see the object from any of the viewing angles. In one embodiment of the invention the original images are taken from the remote distance, and then processed, formatted and fixed to the microlens sheet such that the user will see a plurality of three-dimensional images. The microlens sheet may comprise a plurality of semi-cylindrical or similar cross-section transparent lenses, lying in a plane and extending parallel to one another.
[0011] The original images are obtained by, for example, one or more optical detectors mounted on an aerial or space-borne platform. The optical detectors may detect visible or non-visible frequency bands, or combinations of the same. The optical detectors may be steerable or pointable, either by commands entered local to the platform or by uplink. The optical detectors may include a telephoto lens, with or without a zoom feature.
[0012] The optical detectors obtain images through one or more fields of view, each field of view having a line-of-sight, or bore sight. Selection of the field of view is, for example, by mounting at least one optical detector on a steerable gimbal. [0013] A first embodiment of the invention is a method including flying the platform, or orbiting it, along a platform path over a ground area of interest. A first detection image is detected by the optical detector from a first position on the platform path such that the ground area of interest is in the first image's field of view. A second detection image is detected by the optical detector when the platform is at a second position on the platform path. Likewise, when the platform is at a third position on the platform path a third detection image is detected. The second detection image and the third detection image are detected such that the ground area of interest is in the field of view of each.
[0014] A first digital pixel array representing the first detection image is input to a data processor. Similarly, a second digital pixel array representing the second detection image, and a third digital pixel array representing the third detected image are input to the data processor. The data processor generates an output interphased or interlaced digital pixel array based on the first digital pixel array, the second digital pixel array, and the third pixel array. A visible interphased image is printed on a printable surface of a hard copy sheet, the printed image being based on the output interphased digital pixel array, and a microlens sheet is overlaid onto the printable surface.
[0015] The output interphased digital pixel array is generated, and the visible interphased image is printed such that when the microlens sheet is overlaid the user sees, from a first viewing position, a first three-dimensional image based on the first and second detected images and, from a second viewing position, sees a second three-dimensional image based on the second and third detected images.
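In practical terms, the interphasing step amounts to interleaving columns of the individual pixel arrays so that adjacent printed columns sit under each lenticule and are refracted toward different eyes. The following is a minimal two-view sketch of that idea, assuming equally sized arrays and a vertical lenticular orientation; the function and variable names are illustrative, and a production interphasing step would also account for lens pitch and printer resolution as described later.

```python
import numpy as np

def interlace_pair(left_view: np.ndarray, right_view: np.ndarray) -> np.ndarray:
    """Column-interleave two equally sized views so that, under a vertical lenticular
    array, alternate printed columns are seen by the viewer's left and right eyes."""
    if left_view.shape != right_view.shape:
        raise ValueError("views must have identical dimensions")
    rows, cols = left_view.shape[:2]
    out = np.empty((rows, 2 * cols) + left_view.shape[2:], dtype=left_view.dtype)
    out[:, 0::2] = left_view     # columns destined for the left eye
    out[:, 1::2] = right_view    # columns destined for the right eye
    return out

# One interphased array per three-dimensional view, e.g. the first view from the
# first and second detection images and the second view from the second and third.
# view1 = interlace_pair(first_pixel_array, second_pixel_array)
# view2 = interlace_pair(second_pixel_array, third_pixel_array)
```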
[0016] According to this embodiment, the first three-dimensional image shows a surveillance line of sight extending from the ground area of interest to a point on the platform path halfway between the first viewing position and the second viewing position. Likewise, the second three-dimensional image shows a surveillance line of sight extending from the ground area of interest to a point on the platform path halfway between the second viewing position and the third viewing position. Since the first three-dimensional image and the second three- dimensional image each include a direct down view, each provides a view into holes, between buildings and the like.
[0017] In one aspect of the first embodiment, the microlens sheet comprises a plurality of semi-cylindrical or similar cross-section transparent lenses, lying in a plane and extending parallel to one another. A rotation axis lies in the plane and extends in a direction parallel to the lenses. The first orientation and position includes a first rotation of the hard copy sheet about the rotation axis and the second orientation and position includes a second rotation of the hard copy sheet about the rotation axis.
[0018] In a further aspect of the first embodiment the microlens sheet comprises a plurality of lenses, each having a circular or elliptical circumference, and each having a hemispherical or aspherical cross-section. A first rotation axis lies in the plane, and a second rotation axis lies in the plane and extends normal to the first rotation axis. The first orientation and position includes a first rotation of the hard copy sheet about the first rotation axis and the second orientation and position includes a second rotation of the hard copy sheet about the first rotation axis.
[0019] In a still further aspect of the first embodiment, the microlens sheet comprises a plurality of lenses, each having a circular or elliptical circumference, and each having a hemispherical or aspherical cross-section, according to the previously identified aspect. A fourth detection image is detected from a fourth viewing position spaced laterally in a first direction from the first platform path, with the ground area of interest being in the fourth detection image field. A fourth digital pixel array representing the fourth detection image is input to the data processor. The data processor generates an output interphased or interlaced digital pixel array based on the first digital pixel array, the second digital pixel array, the third pixel array, and the fourth pixel array. The output interphased digital pixel array is further generated, the visible interphased image is printed, and the microlens sheet is overlaid such that when the user rotates the microlens sheet between a first rotation position and a second rotation position about the second rotation axis, the user sees a visual three-dimensional image based, at least in part, on the fourth detected image.
[0020] A further embodiment of the invention is a method including flying the platform along a platform path above a ground area of interest. A first left detection image is detected by the optical detector from a first position on the platform path. A first right detection image is detected by the optical detector from a second position on the platform path. A second left detection image is detected by the optical detector from a third position on the platform path. A second right detection image is detected by the optical detector from a fourth position on the platform path.
[0021] In this further embodiment, a first digital pixel array representing the first detection image is input to a data processor. Similarly, a second digital pixel array representing the second detection image, and a third digital pixel array representing the third detected image are input to the data processor. The data processor generates an output interphased or interlaced digital pixel array based on the first digital pixel array, the second digital pixel array, and the third pixel array. A visible interphased image is printed on a printable surface of a hard copy sheet, the printed image being based on the output interphased digital pixel array, and a microlens sheet is overlaid onto the printable surface. The output interphased digital pixel array is generated, and the visible interphased image is printed such that when the microlens sheet is overlaid the user sees, from a first viewing position, a first three-dimensional image based on the first and second detected images and, from a second viewing position, sees a second three-dimensional image based on the second and third detected images.
[0022] In a variation of this embodiment, the platform path is any of a curved path, semicircular or circular path, or combination of such paths, about a surveillance axis extending normal to the ground area of interest. In this variation the microlens sheet comprises a plurality of lenses, each having a circular or elliptical circumference, and each having a hemispherical or aspherical cross-section, according to the previously identified aspect. In this variation a fourth detection image is detected when the platform is at a fourth position on the platform path. The fourth detection image is such that the ground area of interest is in its field.
[0023] In this variation, a fourth digital pixel array representing the fourth detection image is input to the data processor. The data processor generates an output interphased or interlaced digital pixel array based on the first digital pixel array, the second digital pixel array, the third pixel array, and the fourth pixel array. The output interphased digital pixel array is further generated, the visible interphased image is printed, and the microlens sheet is overlaid such that when the user rotates the microlens sheet between a first rotation position and a second rotation position about the second rotation axis, the user sees a visual three-dimensional image based, at least in part, on the fourth detected image.
[0024] An objective of the present invention is to convert remotely acquired images into one or more hard copy motion, or hard copy multidimensional images and/or display devices, to dramatically improve visual intelligence. The hard copy information can then be used, for example, in briefings and distributed to those involved, e.g., pilots, special ops, before carrying out a mission. The micro lens sheet has been specifically designed to map special algorithms that maintain the geometric and physical properties of light waves. This is accomplished by interphasing the light waves in the precise formulas that are designed to fit the specific requirements of the particular discipline being used. The micro lenses are also designed to transmit back to the human visual system the light waves that closely replicate the original scene or object. The micro lens sheet thereby serves as a method of storage and translation of reconnaissance information. The micro lens sheet permits the viewer to see a multidimensional projection or a "360 degree look-around" view including, for example, height of objects including, for example, buildings, mountains, fires, etc. without the aid of further processing and display equipment.
[0025] These and other objects, features and advantages of the present invention will become more apparent to, and better understood by, those skilled in the relevant art from the following more detailed description of the preferred embodiments of the invention taken with reference to the accompanying drawings, in which like features are identified by like reference numerals.
BRIEF DESCRIPTION OF THE DRAWINGS
[0026] FIG. 1 is an example picture of a building generated for viewing through a polarization-based three-dimensional viewing system;
[0027 ] FIG. 2 depicts an example surveillance system according to the present invention;
[0028] FIG. 3 shows an example flow chart of a first method according to the present invention; and
[0029] FIG. 4 is a microlens hard copy showing two three-dimensional views of the building shown in FIG. 1 , generated in accordance with the methods of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
[0030] FIG. 2 shows an example of a surveillance system for carrying out the method of the present invention. The FIG. 2 system comprises a low earth orbit (LEO) satellite 10, shown at position 12a, position 12b and position 12c along an orbit path 14. Mounted on the satellite 10 are a forward-looking camera 16, a nadir camera 18, and a rearward-looking camera 20. For purposes of example it will be assumed that the cameras 16, 18 and 20 detect images in the visual near-infrared (VNIR) band. This is not a limitation, as the invention can utilize any optical band. The forward camera 16 has a line of sight 16L pointing down from the orbit tangent line TL at an angle THETA. The line of sight 18L of the nadir camera 18 points directly down, and the rearward camera 20 has a line of sight 20L that points down at an angle of minus THETA. The forward camera 16 is shown at position 12a as covering a field of view FV on the ground. As shown, when the satellite 10 is at position 12b the nadir camera 18 has a field of view covering the same area FV. Similarly, when the satellite is at position 12c the field of view of the rearward camera 20 is the area FV.
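The along-track spacing between positions 12a, 12b and 12c follows from this geometry: for the forward and rearward cameras to cover the same area FV as the nadir camera, each oblique position must lead or trail the nadir position by roughly the satellite altitude divided by tan(THETA), under a flat-earth approximation. A small sketch, with purely illustrative altitude and angle values (the patent does not state any):

```python
import math

def along_track_offset_km(altitude_km: float, theta_deg: float) -> float:
    """Flat-earth estimate of how far the forward (or rearward) camera position must
    lead (or trail) the nadir position for its boresight to intersect the same ground
    area.  THETA is measured down from the orbit tangent line, so the boresight makes
    an angle of (90 - THETA) degrees with the local vertical."""
    return altitude_km / math.tan(math.radians(theta_deg))

# Example: a 500 km orbit and THETA = 45 degrees put positions 12a and 12c about
# 500 km before and after the nadir position 12b.
print(along_track_offset_km(500.0, 45.0))
```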
[0031] The above-described example platform is known in the art and therefore a further detailed description is not necessary. Other known platforms and arrangements of cameras are known in the art and may be used. For example, the cameras 16, 18 and 20 could be for bandwidths other than VNIR, examples being panchromatic and shortwave infrared (SWIR). Additional cameras may be mounted on the platform as well.
[0032] The alignment procedures for the cameras are known in the art and, therefore, description is omitted. The uplink and downlink systems for communications from the ground station 30, and the procedures and systems for controlling and stabilizing the satellite 10 are known in the art, and description for these is also omitted.
[0033] Also shown in FIG. 2 are a ground station 30, a processing station 32, a communication link 34 between the ground station 30 and the processing station 32, and an inkjet printer 36. An uplink 38 carries command and control signals from the ground station 30 to the satellite 10, and a downlink 40 carries camera sensor data, described below, and satellite status information. The ground station 30, the uplink 38 and the downlink 40 are in accordance with the known art of satellite communication and control and, therefore, description is not necessary for understanding or practicing this invention.
[0034] FIG. 3 shows an example flow chart for a first embodiment of the invention, and an example operation will be described in reference to the system illustrated by FIG. 2. The example describes only generating and displaying, according to this invention, a multi-view image of one ground area, labeled FV. In actual operation a multi-view image would likely be generated of a plurality of ground areas LV and, as will be understood by one of ordinary skill in the field of satellite and airborne surveillance, such images can be obtained by repeating the FIG. 3 method.
[0035] Referring to FIG. 3, UPDATE commands are first sent by uplink at block 100. Block 100 is shown for illustrative purposes only and, as known in the art, need not be repeated each time that image sensing data is to be collected. As also understood, the timing, specific content, protocols, and signal characteristics of the UPDATE commands are specific to the particular kind and configuration of the satellite 10 and the ground station 30, and such commands are readily implemented by persons skilled in the art of satellite controls.
[0036] Next, at block 102 SENSOR LEFT data is collected by the ground station 30 from camera 16 when the satellite is at position 12a. Then, at block 104, when the orbiting satellite 10 is at position 12b, SENSOR CENTER data is collected from camera 18. Next, at block 106, when the orbiting satellite 10 is at position 12c, SENSOR RIGHT data is collected from camera 20. It will be understood that blocks 102, 104 and 106 are not necessarily performed as separate data collection steps. Instead, depending on the downlink protocol, the SENSOR LEFT, SENSOR CENTER and SENSOR RIGHT data, i.e., data from cameras 16, 18 and 20, may be multiplexed onto a single data stream and continually collected during a time interval that includes the times that the satellite is at positions 12a, 12b and 12c. Further, the collection is not necessarily performed at the ground station 30, because other ground receiving stations (not shown) may receive the data downlink from the satellite 10. Such arrangements of ground stations and data collection stations are known in the art. Still further, the collection steps 102, 104 and 106 may include retransmission through ground repeaters (not shown), as well as encryption and decryption, and land-line transmissions. These data transfer methods and protocols are known in the art.
[0037] After the SENSOR LEFT, SENSOR CENTER, and SENSOR RIGHT data is collected the method goes to step 108, which formats the data, sends it over the link 34 and inputs it to a data processor, shown as item 32 in FIG. 2. The link 34 may be the Internet and, accordingly, the formatting, transfer and input may further include data and data network transmissions such as, for example, a File Transfer Protocol (FTP) transfer. Further, the link 34 is shown for purposes of example only. The data processor 32 may be local to the ground station 30, or to any other ground receiving station.
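By way of illustration only, and assuming the link 34 is the Internet and an FTP transfer is used as mentioned above, the step 108 transfer could be sketched in Python as follows; the host name, login credentials and file names are hypothetical placeholders, not values taken from this specification.

import ftplib

# Illustrative sketch of step 108: sending the collected SENSOR LEFT,
# SENSOR CENTER and SENSOR RIGHT data files to the data processor 32 over
# link 34 using FTP. All names below are hypothetical placeholders.
def send_to_processor(local_files, host="processor.example.net",
                      user="anonymous", password=""):
    with ftplib.FTP(host) as ftp:
        ftp.login(user, password)
        for path in local_files:
            with open(path, "rb") as f:
                ftp.storbinary("STOR " + path, f)

# Example usage with hypothetical file names:
# send_to_processor(["sensor_left.raw", "sensor_center.raw", "sensor_right.raw"])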
[0038] The data processor 32 can be any of a large variety of standard commercially available general purpose programmable digital computers (not shown) having, for example, a standard protocol digital input port, a microprocessor, operating system storage, operating system software stored in same, application program storage, data storage, a standard protocol digital output port and, preferably, a user interface, and a video screen. An example computer is a Dell® model Optiplex® GX 150 having a 1 GHz Intel® Pentium® III or Celeron® microprocessor, 528 MByte RAM, a 60 GByte hard drive, a 19 inch conventional cathode ray tube (CRT) video display, and a standard keyboard and mouse for user entry of data and commands, running under Microsoft Windows 2000® or Windows XP® operating system.
[0039] After inputting to the data processor 32, step 110 reformats the SENSOR LEFT, SENSOR CENTER and SENSOR RIGHT data into three M x N pixel arrays, which are labeled for reference purposes as LeftPixelArray, CenterPixelArray and RightPixelArray. The step 110 reformatting is based on a predetermined, user-input MicroLensData which characterizes physical parameters of the microlens sheet on which the final image set will be printed. Step 110 may also be based on a PrinterResData characterizing performance parameters of the printer 36, particularly the printer's resolution in, for example, dots per inch (DPI). For example, if the microlens sheet is a lenticular sheet (not shown) having a plurality of semi-cylindrical or equivalent lenses, the MicroLensData will characterize the spacing of the lenses in lenses per inch (LPI), as well as the length and width of the final hard copy picture. Step 110 uses this LPI data, and the PrinterResData, to convert the SENSOR LEFT, SENSOR RIGHT and SENSOR CENTER data into M x N pixel arrays LeftPixelArray, CenterPixelArray and RightPixelArray, with M and N selected to place an optimal number of printed pixels under each lenticule.
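A minimal sketch of the step 110 sizing calculation follows, assuming the PrinterResData is given in dots per inch and the MicroLensData supplies the lenticular pitch in LPI and the hard copy dimensions in inches; the function and variable names, and the example figures in the comments, are illustrative and not taken from this specification.

import numpy as np
from PIL import Image

# Illustrative sketch of step 110: choose the pixel array dimensions so that
# a whole number of printed pixel strips falls under each lenticule. As an
# example, a 600 DPI printer and a 60 LPI sheet give 10 printable dots per
# lenticule, i.e. 5 strips for each of two interlaced views.
def strips_per_view(printer_dpi, lenses_per_inch, num_views):
    dots_per_lenticule = printer_dpi / lenses_per_inch
    return int(dots_per_lenticule // num_views)

def resize_to_print_grid(sensor_image, width_in, height_in,
                         printer_dpi, lenses_per_inch, num_views):
    # Resample one camera image onto the M x N grid used for interlacing.
    strips = strips_per_view(printer_dpi, lenses_per_inch, num_views)
    n_cols = int(width_in * lenses_per_inch) * strips   # pixels across the lenticules
    m_rows = int(height_in * printer_dpi)               # pixels along the lenticules
    img = Image.fromarray(sensor_image)
    return np.asarray(img.resize((n_cols, m_rows), Image.BILINEAR))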
[0040] As known to persons skilled in the image collection arts, the pixel resolution of the nadir camera 18 may differ from the pixel resolution of the forward- and rearward-looking cameras 16 and 20. The pixel resolution may differ in terms of the number of pixels generated by the camera, and in terms of the ground area represented by each pixel. One reason for the latter is that the image field of the forward and rearward cameras 16 and 20, in terms of the total ground area, is typically larger than that covered by the nadir camera 18. For example, each pixel generated by the nadir camera 18 may represent 5 meters by 5 meters of ground area, while each pixel generated by the cameras 16 and 20 may represent, for example, 8 meters by 8 meters. These pixel values are only for purposes of example, and assume that no filtering or resolution-altering processing has been performed. To equalize the ground area represented by each pixel of the nadir camera 18 and the ground area represented by each pixel of the forward and rearward cameras 16 and 20, image mapping algorithms may be used. Such algorithms are well known in the art.
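As one illustration of such an image mapping step, and using the 5 meter and 8 meter example values given above, an oblique image can be resampled so that each of its pixels nominally covers the same ground area as a nadir pixel; a production system would also correct for perspective and terrain effects, which are ignored in this sketch.

import numpy as np
from PIL import Image

# Illustrative sketch: resample a forward- or rearward-camera image so each
# pixel covers the same nominal ground area as a nadir-camera pixel
# (8 m x 8 m resampled to 5 m x 5 m, per the example above).
def match_ground_sampling(oblique_image, oblique_gsd_m=8.0, nadir_gsd_m=5.0):
    scale = oblique_gsd_m / nadir_gsd_m                 # 8 / 5 = 1.6x upsampling
    new_w = round(oblique_image.shape[1] * scale)
    new_h = round(oblique_image.shape[0] * scale)
    img = Image.fromarray(oblique_image)
    return np.asarray(img.resize((new_w, new_h), Image.BICUBIC))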
[0041] The images represented by LeftPixelArray, CenterPixelArray and RightPixelArray are of the same ground area LV, viewed from the three positions shown in FIG. 2 as 12a, 12b, and 12c. The difference between the LeftPixelArray and CenterPixelArray is a parallax equivalent to a viewer having his or her left eye at position 12a and his or her right eye at position 12b and looking at the LV area. Therefore, if the LeftPixelArray image is presented to the viewer's left eye, and the CenterPixelArray is presented to the viewer's right eye, he or she will see a three-dimensional image of the LV area, as if viewed from a center line halfway between 12a and 12b, which is labeled as VP1 on FIG. 2. Likewise, if the CenterPixelArray is presented to the viewer's left eye, and the RightPixelArray is presented to the viewer's right eye, then he or she will see a three-dimensional image of the LV area, as if viewed from a center line halfway between points 12b and 12c, which is labeled VP2 on FIG. 2.
[0042] To accomplish this, step 112 generates a 3dView1 and a 3dView2 image, the first being a 2N x 2M pixel array representing a rasterization and interlacing of the LeftPixelArray and CenterPixelArray, and the second being a pixel array of the same format representing a rasterization and interlacing of the CenterPixelArray and RightPixelArray. The 3dView1 pixels are spaced such that when overlaid by the microlens sheet the light from pixels corresponding to the LeftPixelArray is diffracted in one direction and the light from pixels corresponding to the CenterPixelArray is diffracted in another direction. Similarly, the 3dView2 pixels are spaced such that when overlaid by the microlens sheet the light from pixels corresponding to the CenterPixelArray is diffracted in one direction and the light from pixels corresponding to the RightPixelArray is diffracted in another direction. The optical physics of the diffraction are known in the lenticular arts. An example description is found in U.S. Patent No. 6,091,482.
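For purposes of illustration only, and assuming vertical lenticules with the two views alternated column by column, the step 112 interlacing can be sketched as follows; the actual pixel-to-lenticule registration and the output array dimensions are governed by the microlens sheet model discussed below.

import numpy as np

# Illustrative sketch of the step 112 interlacing: alternate columns of the
# two views so that, under each lenticule, one strip is directed toward the
# viewer's left eye and the adjacent strip toward the right eye.
def interlace_pair(left_view, right_view):
    assert left_view.shape == right_view.shape
    rows, cols = left_view.shape[:2]
    out = np.empty((rows, cols * 2) + left_view.shape[2:], dtype=left_view.dtype)
    out[:, 0::2] = left_view     # strips seen by the left eye
    out[:, 1::2] = right_view    # strips seen by the right eye
    return out

# Corresponding to the arrays named in the text:
# 3dView1 would be interlace_pair(LeftPixelArray, CenterPixelArray)   (view from VP1)
# 3dView2 would be interlace_pair(CenterPixelArray, RightPixelArray)  (view from VP2)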
[0043] Utilizing mathematical and ray-trace models of the microlens sheet, step 112 generates 3dView1 to have a pixel spacing, relative to the lenses of the microlens sheet, such that when a user views the microlens sheet from a first viewing direction light from the LeftPixelArray pixels impinges on the viewer's left eye, and light from the CenterPixelArray pixels impinges on the viewer's right eye. Therefore, when the viewer looks at the microlens sheet from the first viewing direction the viewer sees a three-dimensional image of the LV area, as if seen from the VP1 position. Likewise, step 112 generates 3dView2 such that the pixels are spaced relative to the lenses of the microlens sheet so that when a user views the microlens sheet from a second viewing direction light from the CenterPixelArray pixels impinges on the viewer's left eye, and light from the RightPixelArray pixels impinges on the viewer's right eye. Therefore, when the viewer looks at the microlens sheet from the second viewing direction the viewer sees a three-dimensional image of the LV area, as if seen from the VP2 position.
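For orientation only, a greatly simplified paraxial relation between a printed strip's offset under a lenticule and the direction in which its light leaves the sheet is sketched below; real models of the kind referred to here also account for the lens curvature and the refractive index of the sheet, and none of the numbers are taken from this specification.

import math

# Illustrative paraxial sketch: treating each lenticule as an ideal lens with
# focal length roughly equal to the sheet thickness t, a strip printed a
# distance x off the lens axis emerges at approximately arctan(x / t) on the
# opposite side of the axis.
def exit_angle_deg(strip_offset_mm, sheet_thickness_mm):
    return math.degrees(math.atan2(strip_offset_mm, sheet_thickness_mm))

# Example: a strip 0.2 mm off-axis under a 0.6 mm thick sheet leaves at about 18.4 degrees.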
[0044] Regarding the mathematical and/or ray-trace models of the microlens sheet, the generation and utilization of such models, including models of lenticular and other multiple lens sheets, is well known in the imaging arts and, therefore, a description of these need not be presented here.
[0045] At step 114, the pixels for 3dView1, 3dView2, 3dView3 and 3dView4 are printed and overlaid with a microlens sheet comprising a plurality of circular footprint lenses, each having a hemispherical or aspherical cross section. For purposes of reference, the microlens sheet has a first rotation axis extending in a direction in the plane of the microlenses and a second rotation axis extending in the same plane but perpendicular to the first rotation axis. The spacing of the 3dView1, 3dView2, 3dView3 and 3dView4 pixels, with respect to the microlens sheet lenses, is such that when the viewer's line of sight is at a first rotational position about the first axis the viewer sees a three-dimensional image corresponding to 3dView1. When the viewer's line of sight is at a second rotational position about the first axis, he or she sees an image corresponding to the 3dView2 image. When the user rotates the microlens sheet to a particular position about the second axis, the viewer sees a three-dimensional image corresponding to the 3dView3 image. Likewise, when the user rotates the microlens sheet to another particular position about the second axis, the viewer sees a three-dimensional image corresponding to the 3dView4 image.
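One simplified way to picture the step 114 arrangement, offered for illustration only, is a 2 x 2 tiling of the four view images under each circular-footprint lens, so that rotation about one axis selects between one pair of views and rotation about the perpendicular axis selects between the other pair; the actual registration of pixels to lenses is determined by the microlens sheet model and is not shown here.

import numpy as np

# Illustrative sketch: tile four equally sized view images in a 2 x 2
# sub-pixel pattern so that each microlens footprint covers one pixel from
# each view; rotating the sheet about one axis or the other changes which
# view is presented to the observer.
def interleave_quad(view_a, view_b, view_c, view_d):
    rows, cols = view_a.shape[:2]
    out = np.empty((rows * 2, cols * 2) + view_a.shape[2:], dtype=view_a.dtype)
    out[0::2, 0::2] = view_a
    out[0::2, 1::2] = view_b
    out[1::2, 0::2] = view_c
    out[1::2, 1::2] = view_d
    return out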
[0046] As an alternative to printing the 3dView1, 3dView2, 3dView3 and 3dView4 pixels on a printable material and then overlaying that material with a microlens sheet, the pixels can be printed directly on a microlens sheet having a printable back surface.
[0047] Using the above-described embodiment, the viewer is provided with a single hard copy which, with respect to a building, shows the building's left face, right face, front face and back face, each view being three-dimensional. Further, referring to FIG. 2, provided that the images on which the three-dimensional images are based are obtained from a viewing position such as 12b, the viewer will have the option of rotating the hard copy so that he or she looks directly down at the ground area of interest.
[0048] The present invention thereby presents the user, by way of a single hard copy, with the two three-dimensional views, one as if seeing the LV area from VP1 and the other as if seeing the LV area from VP2. Unlike the prior art, the user does not have to wear glasses to see any of the three-dimensional pictures. Instead, with the unaided eye the user can see multiple views of an area or object using only a single hard copy. The prior art, in contrast, provides only a single three-dimensional view, which the user must wear special glasses to see. Therefore, with the present invention the user does not have to wear special glasses and does not have to keep track of, and look back and forth between, a plurality of pictures when studying an area or item of interest. Further, the hard copies can be any viewable size such as, for example, 8 ½" by 11", paper size "A4", large poster-size sheets, or 3" by 5" cards.
[0049] FIG. 4 is a microlens hard copy of a simulated ground area imaged in accordance with the above-described invention. The simulated ground area includes a building 2. The FIG. 4 microlens hard copy provides two three-dimensional views of the ground area. In one of the two three-dimensional views a missile, labeled as item 4, is seen located against a side wall of the building. The FIG. 1 simulation of an existing art stereoscopic image of the building 2 does not show the missile 4. It does not because the viewing angle would have to be a nadir angle, such as that obtained from the nadir camera 18. Such a nadir image of the building 2 is included in one of the two three-dimensional views of the building 2 provided by FIG. 4. For a user of the prior art to be able to detect the missile 4, the user would have to be given a hard copy separate from FIG. 1. Therefore, as can be seen, the user would have to wear polarizing glasses to see what is shown by FIG. 1, and then look at a separate image, if one was provided, to see the missile 4. The present invention solves these problems by providing the user with a hard copy showing multiple three-dimensional views of the area of interest, and the user can inspect each of these, in three-dimensional viewing, by simply rotating the hard copy.
[0050] The example above was described using three image-taking positions, namely 12a, 12b and 12c, and generating two three-dimensional images as a result. A larger number of image-taking positions may be used. Also, the above-described example used images taken from positions 12a and 12b to generate the three-dimensional image along the line of sight VP1, and the image taken from position 12b again, paired with the image taken from position 12c, to generate the three-dimensional image along view line VP2. In the alternative, the second three-dimensional image could have used additional viewing positions, each spaced in the orbit direction beyond points 12b and 12c. Further, the above-described example obtained images by orbiting a single satellite in a planar orbit forming an arc over the imaged area. A 360 degree viewing angle hard copy may be generated by using two satellites, with their respective orbits crossing over one another above the area of interest, at an angle preferably close to ninety degrees. The first satellite would obtain three images representing, respectively, the area of interest as seen from a first, second and third position along that satellite's orbit. From these three images two left-right images would be generated, such as the 3dView1 and 3dView2 images described above. The second satellite obtains three images representing, respectively, the area of interest as seen from a first, second and third position along that satellite's orbit. From these three images two additional left-right images are generated, and these may be labeled as 3dView3 and 3dView4.
[0051] FIG. 2 shows the satellite 10 as an example platform. The present invention further contemplates use of an airborne platform. The airborne platform can be manned or unmanned. The cameras can be gimbal mounted, with automatic tracking and stabilization, as known in the art of airborne surveillance. Such a system may fly the platform in a circular path around a ground area of interest and obtain a plurality of detection images from various points along the path. The image data may be downloaded during flight or stored on board the platform for later retrieval. Two or more pairs of the detection images would be used to generate left-eye and right-eye images, each pair having the parallax information for a three-dimensional view from a viewing angle halfway between the position from which the left-eye image was detected and the position from which the right-eye image was detected. Registration and alignment of the left-eye and right-eye images may be performed, using known image processing techniques, before interphasing their respective pixel arrays for printing and viewing through a microlens sheet.
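As one example of such a registration step, and assuming the paired images differ mainly by a small translation, phase correlation can be used to align the left-eye and right-eye images before they are interphased; the use of scikit-image and SciPy below is an illustrative choice, not a requirement of the method.

from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

# Illustrative sketch: estimate and remove the residual translation between
# a left-eye image and a right-eye image (single-band arrays) prior to
# interphasing. Rotation and scale differences would require further handling.
def register_pair(reference_view, moving_view):
    offset, _, _ = phase_cross_correlation(reference_view, moving_view)
    return nd_shift(moving_view, shift=offset, order=1, mode="nearest")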
[0052] The present invention further contemplates use of multiple platforms for obtaining the plurality of detection images of a particular ground area of interest.
[0053] Those skilled in the art understand that the preferred embodiments described above may be modified, without departing from the true scope and spirit of the invention, and that the particular embodiments shown in the drawings and described within this specification are for purposes of example and should not be construed to limit the invention as set forth in the claims below.

Claims

We Claim:
1. A method for high altitude imagery, comprising: detecting a first image of a ground area from a first point at an altitude; detecting a second image of the ground area from a second point at an altitude; generating a first image pixel array corresponding to the first image and a second image pixel array corresponding to the second image; storing a microlens data representing values of physical parameters of a lenticular sheet; interleaving the first image pixel array and the second image pixel array to form an output interleaved pixel array, the output interleaved pixel array based in part on the microlens data; printing the output interleaved pixel array on a printable medium; and viewing the printed output interleaved pixel array through the microlens sheet.
2. A method according to claim 1, further comprising: detecting a third image of the ground area from a third point at an altitude; and generating a third image pixel array corresponding to the third image, wherein the interleaving interleaves the first image pixel array, the second image pixel array and the third image pixel array to form the output interleaved pixel array, and wherein the interleaving and printing is such that when viewing the printed output interleaved pixel array from a first viewing position with respect to the microlens sheet a three-dimensional image based on the first detected image and the second detected image is seen and, from a second viewing position with respect to the microlens sheet, a three-dimensional image based, at least in part, on the third detected image is seen.
3. A method according to claim 1 wherein said detecting a first image includes providing a platform supporting an optical detector for obtaining a two-dimensional image of a ground area; moving the platform at an altitude over the ground area; and obtaining said first image from said optical detector.
4. A method according to claim 3 wherein said platform includes a satellite.
5. A method according to claim 4, wherein said satellite has an orbit, and said optical detector includes a first camera and a second camera, the first camera having a field of view ahead of the satellite with respect to the orbit direction and the second camera having a field of view behind the satellite with respect to the orbit direction.
6. A method according to claim 2 wherein said detecting a first image includes providing a platform supporting an optical detector for obtaining a two-dimensional image of a ground area; moving the platform at an altitude over the ground area; and obtaining said first image from said optical detector.
7. A method according to claim 6 wherein said platform includes a satellite.
8. A method according to claim 7, wherein said satellite has an orbit, and said optical detector includes a first camera and a second camera, the first camera having a field of view ahead of the satellite with respect to the orbit direction and the second camera having a field of view behind the satellite with respect to the orbit direction.
9. A method according to claim 8, wherein the first field of view and second field of view move along the surface of the earth as the satellite orbits, and said first point is where the first field of view aligns with the ground area of interest and said second point is where the second field of view aligns with the ground area of interest.
10. A method according to claim 6, wherein said optical detector further includes a third camera, the third camera having a field of view that moves along the surface of the earth at a position, relative to a projection of the orbit onto the surface of the earth, between the first field of view and the second field of view, and said third point is where the third field of view aligns with the ground area of interest.
11. A method according to claim 1, wherein said microlens sheet includes a plurality of closed periphery footprint lenses arranged in a plane, with a first reference axis and a second reference axis, substantially perpendicular to the first reference axis, extending in the plane.
12. A method according to claim 11, further comprising: detecting a third image of the ground area from a third point at an altitude; and generating a third image pixel array corresponding to the third image, wherein the interleaving interleaves the first image pixel array, the second image pixel array and the third image pixel array to form the output interleaved pixel array, and the interleaving and printing is such that when viewing the printed output interleaved pixel array from a first viewing position relative to the first reference axis a first three-dimensional image based on the first detected image and the second detected image is seen and, from a second viewing position with respect to the first reference axis, a second three-dimensional image, based, at least in part, on the third detected image, is seen.
13. A method according to claim 11, further comprising: detecting a third image of the ground area from a third point at an altitude; detecting a fourth image of the ground area from a fourth point at an altitude; generating a third image pixel array corresponding to the third image; and generating a fourth image pixel array corresponding to the fourth image, wherein the interleaving interleaves the first image pixel array, the second image pixel array and the third image pixel array to form the output interleaved pixel array, the interleaving and printing is such that when viewing the printed output interleaved pixel array from a first viewing position relative to the first reference axis a first three-dimensional image based on the first detected image and the second detected image is seen and, from a second viewing position with respect to the first reference axis, a second three-dimensional image, based, at least in part, on the third detected image is seen, and when viewing the microlens sheet at a first rotational position with respect to the second reference axis a third three-dimensional image based, at least in part, on the fourth detected image is seen.
PCT/US2003/008951 2002-03-22 2003-03-24 Multiple angle display produced from remote optical sensing devices WO2006115470A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2003224751A AU2003224751A1 (en) 2002-03-22 2003-03-24 Multiple angle display produced from remote optical sensing devices

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10/102,887 2002-03-22
US10/102,887 US6894809B2 (en) 2002-03-01 2002-03-22 Multiple angle display produced from remote optical sensing devices

Publications (1)

Publication Number Publication Date
WO2006115470A1 true WO2006115470A1 (en) 2006-11-02

Family

ID=37215022

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2003/008951 WO2006115470A1 (en) 2002-03-22 2003-03-24 Multiple angle display produced from remote optical sensing devices

Country Status (3)

Country Link
US (1) US6894809B2 (en)
AU (1) AU2003224751A1 (en)
WO (1) WO2006115470A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11917119B2 (en) 2020-01-09 2024-02-27 Jerry Nims 2D image capture system and display of 3D digital image

Families Citing this family (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7073158B2 (en) * 2002-05-17 2006-07-04 Pixel Velocity, Inc. Automated system for designing and developing field programmable gate arrays
IL149934A (en) * 2002-05-30 2007-05-15 Rafael Advanced Defense Sys Airborne reconnaissance system
US7424133B2 (en) * 2002-11-08 2008-09-09 Pictometry International Corporation Method and apparatus for capturing, geolocating and measuring oblique images
US7308342B2 (en) * 2004-01-23 2007-12-11 Rafael Armament Development Authority Ltd. Airborne reconnaissance system
US20070188610A1 (en) * 2006-02-13 2007-08-16 The Boeing Company Synoptic broad-area remote-sensing via multiple telescopes
US20080036864A1 (en) * 2006-08-09 2008-02-14 Mccubbrey David System and method for capturing and transmitting image data streams
US7873238B2 (en) 2006-08-30 2011-01-18 Pictometry International Corporation Mosaic oblique images and methods of making and using same
US20080151049A1 (en) * 2006-12-14 2008-06-26 Mccubbrey David L Gaming surveillance system and method of extracting metadata from multiple synchronized cameras
US8587661B2 (en) * 2007-02-21 2013-11-19 Pixel Velocity, Inc. Scalable system for wide area surveillance
US9262818B2 (en) 2007-05-01 2016-02-16 Pictometry International Corp. System for detecting image abnormalities
US20090086023A1 (en) * 2007-07-18 2009-04-02 Mccubbrey David L Sensor system including a configuration of the sensor as a virtual sensor device
ITTO20070620A1 (en) * 2007-08-31 2009-03-01 Giancarlo Capaccio SYSTEM AND METHOD FOR PRESENTING VISUAL DATA DETACHED IN MULTI-SPECTRAL IMAGES, MERGER, AND THREE SPACE DIMENSIONS.
US7991226B2 (en) 2007-10-12 2011-08-02 Pictometry International Corporation System and process for color-balancing a series of oblique images
US8531472B2 (en) 2007-12-03 2013-09-10 Pictometry International Corp. Systems and methods for rapid three-dimensional modeling with real façade texture
US8588547B2 (en) 2008-08-05 2013-11-19 Pictometry International Corp. Cut-line steering methods for forming a mosaic image of a geographical area
US8401222B2 (en) 2009-05-22 2013-03-19 Pictometry International Corp. System and process for roof measurement using aerial imagery
US8555406B2 (en) 2009-10-06 2013-10-08 At&T Intellectual Property I, L.P. Remote viewing of multimedia content
US9330494B2 (en) 2009-10-26 2016-05-03 Pictometry International Corp. Method for the automatic material classification and texture simulation for 3D models
US20110115909A1 (en) * 2009-11-13 2011-05-19 Sternberg Stanley R Method for tracking an object through an environment across multiple cameras
US8477190B2 (en) 2010-07-07 2013-07-02 Pictometry International Corp. Real-time moving platform management system
US8823732B2 (en) 2010-12-17 2014-09-02 Pictometry International Corp. Systems and methods for processing images with edge detection and snap-to feature
CA2835290C (en) 2011-06-10 2020-09-08 Pictometry International Corp. System and method for forming a video stream containing gis data in real-time
US9183538B2 (en) 2012-03-19 2015-11-10 Pictometry International Corp. Method and system for quick square roof reporting
WO2014039476A1 (en) 2012-09-05 2014-03-13 Lumenco, Llc Pixel mapping, arranging, and imaging for round and square-based micro lens arrays to achieve full volume 3d and multi-directional motion
US9244272B2 (en) 2013-03-12 2016-01-26 Pictometry International Corp. Lidar system producing multiple scan paths and method of making and using same
US9275080B2 (en) 2013-03-15 2016-03-01 Pictometry International Corp. System and method for early access to captured images
AU2015204838B2 (en) 2014-01-10 2020-01-02 Pictometry International Corp. Unmanned aircraft structure evaluation system and method
US9292913B2 (en) 2014-01-31 2016-03-22 Pictometry International Corp. Augmented three dimensional point collection of vertical structures
WO2015120188A1 (en) 2014-02-08 2015-08-13 Pictometry International Corp. Method and system for displaying room interiors on a floor plan
US20150237327A1 (en) * 2015-04-30 2015-08-20 3-D XRay Technologies, L.L.C. Process for creating a three dimensional x-ray image using a single x-ray emitter
US10402676B2 (en) 2016-02-15 2019-09-03 Pictometry International Corp. Automated system and methodology for feature extraction
US10671648B2 (en) 2016-02-22 2020-06-02 Eagle View Technologies, Inc. Integrated centralized property database systems and methods

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5600402A (en) * 1995-05-04 1997-02-04 Kainen; Daniel B. Method and apparatus for producing three-dimensional graphic images using a lenticular sheet

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5905593A (en) * 1995-11-16 1999-05-18 3-D Image Technology Method and apparatus of producing 3D video by correcting the effects of video monitor on lenticular layer
US6133945A (en) * 1994-08-19 2000-10-17 Leica Microsystems Ag Method and device for showing stereoscopic video images on a display


Also Published As

Publication number Publication date
AU2003224751A1 (en) 2006-11-16
US6894809B2 (en) 2005-05-17
US20030164962A1 (en) 2003-09-04

Similar Documents

Publication Publication Date Title
US6894809B2 (en) Multiple angle display produced from remote optical sensing devices
US9294755B2 (en) Correcting frame-to-frame image changes due to motion for three dimensional (3-D) persistent observations
US9706119B2 (en) Digital 3D/360 degree camera system
US5124915A (en) Computer-aided data collection system for assisting in analyzing critical situations
US20180184073A1 (en) Systems and Methods For Recording Stereo Pairs From Independent Camera Platforms
EP2302941B1 (en) System and method for creating stereoscopic 3D video
CA2737451C (en) Method and apparatus for displaying stereographic images of a region
US9536320B1 (en) Multiple coordinated detectors for examination and ranging
EP2659680B1 (en) Method and apparatus for providing mono-vision in multi-view system
US20180184063A1 (en) Systems and Methods For Assembling Time Lapse Movies From Consecutive Scene Sweeps
US6781707B2 (en) Multi-spectral display
KR102126159B1 (en) Scanning panoramic camera and scanning stereoscopic panoramic camera
CA2429176A1 (en) Combined colour 2d/3d imaging
CN111541887A (en) Naked eye 3D visual camouflage system
US20180174270A1 (en) Systems and Methods For Mapping Object Sizes and Positions Onto A Cylindrical Panorama Using A Pivoting Stereoscopic Camera
CN101523436A (en) Method and filter for recovery of disparities in a video stream
Schultz et al. System for real-time generation of georeferenced terrain models
WO2004070430A2 (en) Multiple angle display produced from remote optical sensing devices
Buchroithner et al. Three in one: Multiscale hardcopy depiction of the Mars surface in true-3D
Ondrejka et al. Note on the stereo interpretation of nimbus ii apt photography
WO2021149484A1 (en) Image generation device, image generation method, and program
Buchroithner et al. True 3d visualization of mountainous terrain by means of lenticular foil technology
Mattson et al. Exploring the Moon with LROC‐NAC Stereo Anaglyphs
Miura et al. Geometric analysis on stereoscopic images captured by single high-definition television camera on lunar orbiter Kaguya (SELENE)
McLAURIN et al. Advanced alternating-frame technology (VISIDEP) and three-dimensional remote sensing

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

WWW Wipo information: withdrawn in national office

Country of ref document: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application