ACCURATE TARGETING FROM IMPRECISE
CROSS-REFERENCE TO RELATED APPLICATIONS

STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT

REFERENCE TO AN APPENDIX

Not Applicable.
1. Field of Technology
The field of technology relates generally to navigation, and more particularly to locating a spotter position and targeting objects.
2. Description of Related Art
The physical world comprises physical objects and locations. There are many known devices and systems associated with navigating the physical world. Most relate to the use of one or more substantially constant signal emitters, or "beacons," where the signal is associated with the particular physical location of the beacon.
With the proliferation of computing devices and the rapid growth of the Internet and use of the World Wide Web ("www"), physical objects-of-interest such as commercial entities, e.g., a bookstore, and even particular things, e.g., particular books on the store shelves, are related to one or more virtual representations—commonly referred to as "web pages." The virtual representations generally use text, audio, still or video images, and the like to describe or illustrate the related object-of-interest and possibly even offer associated commercial services. Each web page can represent one or more objects. These computerized constructs, often referred to as "cyberspace," a "virtual world," or hereinafter as simply "the web," have become an important economic reality, providing individuals with continually improving convenience in personal and business activities.
Bridges between the physical world and virtual world are rapidly appearing. Particularly, portable wireless communication devices and applications provide great convenience and wide use potential. One problem associated with such mobile devices is the inherent imprecision of positioning systems. For example, most personal global positioning system ("GPS") devices, relying on triangulation calculations based on acquisition of signals from three or more Earth-orbiting satellites, have an accuracy, or potential position variance, of only about ±5 meters.
As links to the web proliferate, locating and acquiring a particular link associated with a specific object-of-interest without use of a full web browser search routine will be user-preferred. Discrimination between links associated with physically proximate objects becomes more difficult. For example, in an art museum, acquiring an immediate link to a web page for one specific objet d'art in a room full of many becomes more complicated if each has its own link.
SUMMARY

In a basic aspect, there is provided a method and apparatus for targeting physical world objects from a substantially portable computing device. A mechanism is provided for automatically adjusting the device field-of-vision, including compensating for lack of knowledge of the present precise location, such that all objects within the range of the device are accurately targeted. An exemplary embodiment of acquiring a link to a web beacon is described.
The foregoing summary is not intended to be an inclusive list of all the aspects, objects, advantages and features of described embodiments, nor should any limitation on the scope of the invention be implied therefrom. This Summary is provided in accordance with the mandate of 37 C.F.R. 1.73 and M.P.E.P. 608.01(d) merely to apprise the public, and more especially those interested in the particular art to which the invention relates, of the nature of the invention in order to be of assistance in aiding ready understanding of the patent in future searches.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic diagram illustrating a mobile device relative to an object-of-interest.
FIG. 2 is a schematic block diagram of an exemplary mobile device as shown in FIG. 1.
FIG. 3 is a schematic diagram showing an estimated mobile device (as in FIG. 2) location with respect to a plurality of objects-of-interest (as in FIG. 1).
FIG. 4 is a schematic diagram illustrating a location incertitude construct associated with a mobile device as shown in FIGS. 1 and 3.
FIG. 5 is a schematic diagram illustrating an extended device vision construct.
FIG. 6 is a diagram associated with calculating the extended device vision construct in accordance with FIG. 5.
FIG. 7 is a flow chart associated with calculating the extended device vision construct in accordance with FIG. 5.
FIG. 8 is an illustration of the trigonometry associated with computation of extended device vision associated with the flow chart of FIG. 7.
Like reference designations represent like features throughout the drawings. The drawings referred to in this specification should be understood as not being drawn to scale except as specifically annotated.
DETAILED DESCRIPTION
FIG. 1 is an illustration where an object-of-interest 103 is within range of targeting by a mobile device 101. In order to make the description of an exemplary embodiment of the present invention easier to understand, assume the device 101 is a portable telephone with a digital view screen and known manner Internet capability. Assume also that the object-of-interest 103 is mapped in a database of many objects, e.g., a database map of physical buildings in a given city where each mapped building has an associated web page. Also assume there is a possibility that there are one or more web links associated with the object-of-interest 103, e.g., a department store within a line-of-sight of the user of the device 101. It should be recognized that a wide variety of applications and implementations can be envisioned; the use of specific exemplary embodiments described herein is for convenience; no limitation on the scope of the invention is intended nor should any be implied therefrom.
Looking also to FIG. 2, this mobile device 101 is equipped with known manner telecommunications hardware and software/firmware, including a web compatible program, here shown as a labeled box, "telecom" 201. The mobile device 101 is also provided with self-locating subsystems, e.g., a magnetic compass 203 for determining current heading direction, and a subsystem 205, such as GPS, for determining current location of the device 101 within a predetermined variance according to the specific implementation of the GPS. In addition to the wireless telecom 201 subsystem, the device 101 has a known manner receiver 207 for obtaining "beacon" signals, wherein it can receive wireless signals (RF, infrared, or the like) from an object 103. Note that this signal can be any type of broadcast identifying information associated with the object; for purposes of continuing the exemplary embodiment in an application associated with use of the web, let the beacon signal be indicative of a Uniform Resource Locator ("URL") of a web page associated with the object-of-interest 103, e.g., a department store product directory. Note, however, that the "broadcast" can in fact be simply reflected light which can be acquired through a simple optical lens. For example, if the object-of-interest 103 is relatively large, such as a department store, and the user is interested in knowing whether the store carries a product, getting to the associated web URL could be as simple as aiming an optical lens for an attempt at acquisition of a related, "mapped," beacon; specific implementations can be tailored to connectivity needs. That is to say, the acquisition of a link to obtain data from the object-of-interest 103—in this example the URL associated with the store—does not rely upon connecting to the link itself as a prerequisite.
The device's receiver 207 will have a given "aperture," a°(d), defined as the angular range through which the mobile device 101 may be rotated while continuing acquisition at a distance "d" (rather than being a global value, the aperture will vary depending on "d"). For purposes of describing the invention, the term "vision angle" ("v°") is defined as the aperture angle of the mobile device if the self-positioning accuracy were perfect, and the term "vision area" ("V_p") is defined as the area where the device could maintain signal acquisition from the position "p" if the self-positioning accuracy were perfect.
A subsystem 209, e.g., a known manner programmable microprocessor or application specific integrated circuit ("ASIC"), is also provided for performing calculations (which will be described in detail hereinafter) with respect to the accuracy of the precise location of the mobile device 101. For convenience, this subsystem 209 is referred to hereinafter as the "position recalculator" 209.
FIG. 3 is an illustration showing that the mobile device 101 has an estimated location, "p_e," from the data of the GPS 205 (FIG. 2) subsystem. Two objects 301, 302 are outside the constant vision angle "v°" of the mobile device 101. However, since the accuracy of the GPS 205 has a given variance, it is demonstrated by FIG. 4 that there is associated with the mobile device a circle-of-incertitude "I" (approximated for purposes of this description as a circle, although in actuality it is a hemisphere having a height dimension; for this embodiment the altitude of the device is ignored). The mobile device 101 may actually be anywhere in the circle, such as at a perimeter location "p_i." The vision area 401 for the estimated device location p_e is shown as V(p_e) and the vision area 403 for the possible actual location p_i is shown as V(p_i). In other words, for this illustration, when the mobile device 101e thinks it is at location p_e but is in reality the device 101i at location p_i, the user could be pointing the mobile device 101 at an object and not acquiring the associated broadcast signal due to the variance of the positioning subsystem GPS 205. Conversely, if the mobile device 101 is actually at location p_i, it will acquire the object shown as object 302 when it was not the specific object-of-interest; in other words, the user, thinking from the GPS 205 data that they are at location p_e when actually at location p_i, may be confused if a URL for Object 2 suddenly appears on the mobile device's screen due to the inaccuracy of actual location determination, even though the user is pointing at the object-of-interest 405, which lies outside vision area 403 between Object 1 and Object 2.
The position recalculator 209 introduces a factor of "extended device vision," represented by the symbol "E." The extended device vision "E" is in effect a union of all vision areas from all the possible user locations in the circle-of-incertitude "I" for the mobile device 101 at a current estimated location, where the diameter of the circle-of-incertitude is defined by the predetermined variance inherent to the GPS subsystem 205. Note that this "circle" may take other shapes, e.g., ovoids, and the like. This union of all vision areas, the extended device vision, can be defined by the equation:

E = ∪_{p ∈ I} V(p) = {e_i : e_i ∈ V(p) for some p ∈ I},

where "e_i" represents objects to detect.
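The union of vision areas just described can be illustrated with a short sketch. The following Python fragment is not part of the patent; the planar coordinate model, the 50-unit beacon range, and the perimeter-sampling approximation of the circle-of-incertitude are assumptions made purely for illustration. It tests whether an object falls in the extended vision E by checking the vision area V(p) from candidate positions p around the estimated location:

```python
import math

def in_vision(p, heading_deg, v_deg, obj, rng=50.0):
    """True if obj lies in the vision area V(p): within beacon range and
    within half the vision angle v° of the device heading."""
    dx, dy = obj[0] - p[0], obj[1] - p[1]
    if math.hypot(dx, dy) > rng:
        return False
    bearing = math.degrees(math.atan2(dy, dx))
    # smallest signed offset between bearing and heading, in degrees
    off = abs((bearing - heading_deg + 180.0) % 360.0 - 180.0)
    return off <= v_deg / 2.0

def in_extended_vision(p_e, accuracy, heading_deg, v_deg, obj, samples=360):
    """Approximate membership in E, the union of V(p) over all positions p
    in the circle-of-incertitude of radius `accuracy` around estimate p_e."""
    for k in range(samples):
        t = 2.0 * math.pi * k / samples
        p = (p_e[0] + accuracy * math.cos(t), p_e[1] + accuracy * math.sin(t))
        if in_vision(p, heading_deg, v_deg, obj):
            return True
    return in_vision(p_e, heading_deg, v_deg, obj)
```

Sampling only the perimeter is a reasonable approximation for objects outside the circle, since the extreme lines of sight are the tangents drawn from perimeter positions.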
FIG. 5 is a representation of the introduced extended device vision for the same factors shown in FIG. 4, illustrating that Object 2 (302)—now assumed to be the actual object-of-interest—is detected even though the GPS 205 (FIG. 2) subsystem believes the mobile device 101e is at position p_e when it may actually be elsewhere, at any p_i within the circle-of-incertitude "I." In order to set up acquisition of an object-of-interest, e.g., obtaining a URL link from a beacon at Object 2 (302), the position recalculator 209 will detect all objects in "E," wherein the aperture of the device for an object depends on the object distance and is modulated by the GPS's position accuracy. While in this embodiment the extended field of vision is shown as a parabolic construct, it will be recognized by those skilled in the art that other polygon constructs may be employed. Also, again, it should be noted that with added accounting for the height variance factor, an implementation may have a "vision volume" which is a three-dimensional shape, e.g., a cone. Such an implementation can be made in order to have, to continue the previous example, a link to a beacon associated with only a particular floor of a multi-story department store.
Using FIG. 6 to illustrate, the calculation of the extended device vision describes an extended device aperture "α°_obj" covering all the extreme possible positions "P_0" of an object in the extended vision at a distance "d" from the mobile device 101e. The maximum lines of sight describing the vision area for each possible position now lie on the two tangents 601, 602 to the location circle-of-incertitude 603, each having an angle to the device heading 604 equal to a vision angle 605 equal to the known vision angle of the device "v°." This can be described by the equation:

α°_obj = v° + X°, where X° = sin⁻¹(a/d)

and "a" is the radius of the circle-of-incertitude.
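The aperture relation implied by FIG. 6, reading X° = sin⁻¹(a/d) as the tangent half-angle to a circle of radius a seen from distance d, can be checked numerically. The following Python sketch is illustrative only; the function name and the use of degrees are assumptions, not part of the patent:

```python
import math

def sweep_angle(d, accuracy, v_deg):
    """Extended aperture in degrees: alpha = v° + asin(a/d), where
    `accuracy` (a) is the circle-of-incertitude radius and d the object
    distance. Defined only for objects outside the circle (d > a)."""
    if d <= accuracy:
        raise ValueError("object lies inside the circle-of-incertitude")
    return v_deg + math.degrees(math.asin(accuracy / d))
```

For v° = 10° and a = 5 m, an object 100 m away yields an aperture of roughly 12.87°; as d grows, the aperture shrinks back toward v°, consistent with the minimum/maximum behavior described later in the specification.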
In other words, this calculation as related to FIGS. 4 and 5 puts the mobile device 101 at every possible position within the circle-of-incertitude 603. Note that extension of the tangent lines 601, 602 (FIG. 6) behind the mobile device 101 to an intersection thereof creates a virtual targeting, or "spotter," location with respect to the actual perceived location of the mobile device in the circle-of-incertitude. However, the actual position of the mobile device 101 to be ascertained can only be within the circle-of-incertitude. Therefore, one must consider the case where the circle-of-incertitude is very large; with the range of the object broadcast beacons being limited, using that virtual targeting focus location of the tangent lines in the calculations could move one mathematically out of range of an object actually within range. Thus, while this "virtual stepping backward" to the intersection of the tangent lines constructive spotter location can be employed for calculations, it is not a preferred embodiment.
FIG. 7 is a flow chart also illustrating the repositioning calculation in accordance with the diagram of FIG. 8, illustrating adjustment of the aperture for each object. To find out which objects are in the extended vision of the device 101, a "Map" of the objects (a given database providing location, beacon frequency, and the like information with respect to all objects in the current Map), the approximate location of the device, L_device, the "Accuracy" of the position determination, and the vision angle "v°" are obtained, step 701. A register, or other temporary data storage, "S_obj," for detected objects is emptied, step 703. For each Object_n in the Map, the distance "d" from the device to each is determined, step 705. It is determined whether the object is within the circle-of-incertitude, step 707. If so, step 707, YES-path, the object is entered into the current S_obj register, step 709. A check to determine if the current object-under-consideration is the last in the Map is made, and if not, step 710, NO-path, the process loops back to processing the next object. If the distance from the device to the current Object under consideration is greater than the radius of the circle-of-incertitude, step 707, NO-path, the angle β°_obj between the device heading 604 and the current Object direction is determined, step 711.
Next, the aperture angle, α°_obj, between the heading 604 and the extreme possible position P_0 of an object in the extended vision at distance "d" from the device is determined, step 713. If the acquired Object was already in the set of previously detected objects, S_obj_old, step 715, YES-path, an angle hysteresis° limit is increased, step 717. Hysteresis° is an arbitrary parameter which is set to make already detected objects harder to lose. This means that an object will remain detected for a limited time even after it has left the current vision area. This avoids jitter for objects close to the boundary of the vision area.
Next, step 715, NO-path, if the angle β°_obj (step 711) is smaller than the angle α°_obj (step 713), step 719, YES-path, the current object is added to the detected object list, S_obj, step 721. If not, step 719, NO-path, the process reruns the check step 710 until all objects have been processed. Once all objects have been processed, step 710, YES-path, the list of detected objects, S_obj, is returned, step 723. For example, the list is provided on a display screen of the mobile device 101, from which the user can choose a particular target object of all the objects in the extended field of vision that are so listed. Effectively, once a specific Object is selected from the list, the focus of the mobile device 101 becomes any broadcast, e.g., URL beacon, of the selected Object.
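The detection loop traced by the FIG. 7 flow chart can be sketched end-to-end in Python. This is an illustrative reading, not the patent's own code; the function name detect_objects, the planar coordinate model, and the default hysteresis value are assumptions introduced here:

```python
import math

def detect_objects(map_objs, l_device, accuracy, heading_deg, v_deg,
                   s_old=frozenset(), hysteresis=2.0):
    """Return the names of detected objects. Objects inside the
    circle-of-incertitude (step 707) are detected unconditionally; others
    are kept when their bearing offset beta is within the distance-dependent
    aperture alpha = v° + asin(a/d) (step 713), widened by a hysteresis
    margin for objects already detected previously (steps 715/717)."""
    detected = []
    for name, (x, y) in map_objs.items():
        dx, dy = x - l_device[0], y - l_device[1]
        d = math.hypot(dx, dy)                          # step 705
        if d <= accuracy:                               # step 707, YES-path
            detected.append(name)                       # step 709
            continue
        bearing = math.degrees(math.atan2(dy, dx))
        beta = abs((bearing - heading_deg + 180.0) % 360.0 - 180.0)  # step 711
        alpha = v_deg + math.degrees(math.asin(accuracy / d))        # step 713
        if name in s_old:                               # step 715, YES-path
            alpha += hysteresis                         # step 717
        if beta < alpha:                                # step 719
            detected.append(name)                       # step 721
    return detected                                     # step 723
```

A borderline object that drops just outside the aperture is retained when it was in the previous detection set, which is exactly the anti-jitter role the specification assigns to the hysteresis parameter.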
Note that for objects within the circle-of-incertitude, that is, within the resolution of the mobile device's position accuracy range, the algorithm is automatically self-adjusting, having an adaptive fidelity aspect. Note also that from an optical sensing perspective, the algorithm equates to "seeing" all objects within range simultaneously with binocular vision, all objects being superimposed and having a width related to the distance (i.e., closer is bigger). That is, for each object there is created a different field of vision from the same location.
In pseudo-code, the calculation can be described as follows:
compute_Objects (Map, L_device, Accuracy, v°, S_obj)
  for each Object in Map,
    d ← ObjectDistanceFrom (L_device)
    if (d ≤ Accuracy) add Object to S_obj; next Object
    α°_obj ← SweepAngle (d, Accuracy, v°)
    if (Object ∈ S_obj_old)
      α°_obj ← α°_obj + hysteresis°
    if (β°_obj < α°_obj)
      add Object to S_obj
  return S_obj
The aperture, α°_obj, ranges from a minimum value, equal to the vision angle "v°," up to 360°. By studying the equation, it can be recognized that the minimum aperture will occur when the object is far away from the device ("d" is relatively large and X° is relatively small), while the maximum aperture will occur when the object is close. The algorithm preserves perception of distance.
Thus, a compensation for tolerances inherent in state-of-the-art locating and targeting is provided. Based upon the known approximate position and heading of the spotter, and certain data related to objects within the vision area, an expanded field of vision related to the perspective of a virtual spotter location is generated. Compensation for present location variance due to locator equipment inherent tolerance is provided such that targeting of each object in the vision area can be precisely implemented.
The foregoing description, illustrating certain embodiments and implementations, is not intended to be exhaustive nor to limit the invention to the precise form or to the exemplary embodiments disclosed. Obviously, many modifications and variations will be apparent to practitioners skilled in this art. Similarly, any process steps described might be interchangeable with other steps in order to achieve the same result. At least one embodiment was chosen and described in order to best explain the principles of the invention and its best mode practical application, thereby to enable others skilled in the art to understand the invention for various embodiments and with various modifications as are suited to the particular use or implementation contemplated. The scope of the invention can be determined from the claims appended hereto and their equivalents. Reference to an element in the singular is not intended to mean "one and only one" unless explicitly so stated, but rather means "one or more." Moreover, no element, component, or method step in the present disclosure is intended to be dedicated to the public regardless of whether the element, component, or method step is explicitly recited in the following claims. No claim element herein is to be construed under the provisions of 35 U.S.C. Sec. 112, sixth paragraph, unless the element is expressly recited using the phrase "means for . . . " and no process step herein is to be construed under those provisions unless the step or steps are expressly recited using the phrase "comprising the step(s) of . . ."
What is claimed is:
1. A positioning device comprising:
means for determining heading of the device;
means for determining current position of the device within a predetermined variance;