US20060208155A1 - Error corrected optical navigation system
- Publication number
- US20060208155A1 (application US 11/083,796)
- Authority
- US
- United States
- Prior art keywords
- optical
- displacement estimate
- navigation system
- estimate
- recited
- Prior art date
- Legal status
- Granted
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/0304—Detection arrangements using opto-electronic means
- G06F3/0317—Detection arrangements using opto-electronic means in co-operation with a patterned surface, e.g. absolute position or relative movement detection for an optical mouse or pen positioned with respect to a coded surface
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/0354—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
- G06F3/03543—Mice or pucks
Definitions
- One of the most common and, at the same time, useful input devices for user control of modern computer systems is the mouse.
- the main goal of a mouse as an input device is to translate the motion of an operator's hand into signals that the computer can use. This goal is accomplished by displaying on the screen of the computer's monitor a cursor which moves in response to the user's hand movement. Commands which can be selected by the user are typically keyed to the position of the cursor. The desired command can be selected by first placing the cursor, via movement of the mouse, at the appropriate location on the screen and then activating a button or switch on the mouse.
- Positional control of cursor placement on the monitor screen was initially obtained by mechanically detecting the relative movement of the mouse with respect to a fixed frame of reference, i.e., the top surface of a desk or a mouse pad.
- a common technique is to use a ball inside the mouse which in operation touches the desktop and rolls when the mouse moves. Inside the mouse there are two rollers which touch the ball and roll as the ball rolls. One of the rollers is oriented so that it detects motion in a nominal X direction, and the other is oriented 90 degrees to the first roller so it detects motion in the associated Y direction.
- the rollers are connected to separate shafts, and each shaft is connected to a separate optical encoder which outputs an electrical signal corresponding to movement of its associated roller. This signal is appropriately encoded and sent typically as binary data to the computer which in turn decodes the signal it received and moves the cursor on the computer screen by an amount corresponding to the physical movement of the mouse.
- optical navigation techniques have been used to produce the motion signals that are indicative of relative movement along the directions of coordinate axes. These techniques have been used, for instance, in optical computer mice and fingertip tracking devices to replace conventional mice and trackballs, again for the position control of screen pointers in windowed user interfaces for computer systems. Such techniques have several advantages, among which are the lack of moving parts that accumulate dirt and that suffer from mechanical wear when used.
- Distance measurement of movement of paper within a printer can be performed in different ways, depending on the situation. For printer applications, we can measure the distance moved by counting the number of steps taken by a stepper motor, because each step of the motor will move a certain known distance. Another alternative is to use an encoding wheel designed to measure relative motion of the surface whose motion causes the wheel to rotate. It is also possible to place marks on the paper that can be detected by sensors.
- Motion in a system using optical navigation techniques is measured by tracking the relative displacement of a series of images.
- a two dimensional view of an area of the reference surface is focused upon an array of photo detectors, whose outputs are digitized and stored as a reference image in a corresponding array of memory.
- a brief time later a second image is digitized. If there has been no motion, then the image obtained subsequent to the reference image and the reference image are essentially identical. If, on the other hand, there has been some motion, then the subsequent image will have been shifted along the axis of motion with the magnitude of the image shift corresponding to the magnitude of physical movement of the array of photosensors.
- the so-called optical mouse, used in place of the mechanical mouse for positional control in computer systems, employs this technique.
- the direction and magnitude of movement of the optical mouse can be measured by comparing the reference image to a series of shifted versions of the second image.
- the shifted image corresponding best to the actual motion of the optical mouse is determined by performing a cross-correlation between the reference image and each of the shifted second images with the correct shift providing the largest correlation value. Subsequent images can be used to indicate subsequent movement of the optical mouse using the method just described.
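The shift-and-compare search just described can be sketched in a few lines. The images are represented here as small integer grids, and the helper names are hypothetical; an actual sensor performs this search in dedicated hardware over its full pixel array:

```python
def correlate(ref, img, dx, dy):
    """Sum of products over the region where img, sampled at an offset of
    (dx, dy), overlaps ref. Larger values indicate better alignment."""
    h, w = len(ref), len(ref[0])
    total = 0
    for y in range(h):
        for x in range(w):
            sx, sy = x + dx, y + dy
            if 0 <= sx < w and 0 <= sy < h:
                total += ref[y][x] * img[sy][sx]
    return total

def estimate_shift(ref, img, max_shift=1):
    """Try every shift up to max_shift pixels in each direction and
    return the one with the largest cross-correlation value."""
    best, best_val = (0, 0), None
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            v = correlate(ref, img, dx, dy)
            if best_val is None or v > best_val:
                best_val, best = v, (dx, dy)
    return best
```

With a 3×3 reference whose bright feature has moved one pixel to the right in the second image, `estimate_shift` reports a shift of `(1, 0)`; with identical images it reports `(0, 0)`.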
- the image obtained which is to be compared with the reference image may no longer overlap the reference image to a degree sufficient to be able to accurately identify the motion that the mouse incurred. Before this situation can occur it is necessary for one of the subsequent images to be defined as a new reference image. This redefinition of the reference image is referred to as re-referencing.
- Measurement inaccuracy in optical navigation systems is a result of the manner in which such systems obtain their movement information.
- Optical navigation sensors operate by obtaining a series of images of an underlying surface. This surface has a micro texture. When this micro texture is illuminated (typically at an angle) by a light, the micro texture of the surface results in a pattern of shadows that is detected by the photosensor array. A sequence of images of these shadow patterns is obtained, and the optical navigation sensor attempts to calculate the relative motion of the surface that would account for changes in the image. Thus, if an image obtained at time t(n+1) is shifted left by one pixel relative to the image obtained at time t(n), then the optical navigation sensor most likely has been moved right by one pixel relative to the observed surface.
- any positional errors from the previous re-referencing procedure are accumulated.
- the amount of measurement error over a given distance is proportional to E·√N, where E is the error per reference frame change and N is the number of reference frame updates.
- the optical navigation system comprises an image sensor capable of optical coupling to a surface of an object, a data storage device, and a navigation circuit.
- the data storage device is capable of storing successive images captured by the image sensor.
- the navigation circuit comprises a first digital circuit capable of determining an optical displacement estimate for a relative displacement between the image sensor and the object obtained by comparing the images, a second digital circuit capable of determining a mechanical displacement estimate for the relative displacement obtained from consideration of the mechanical characteristics of the optical navigation system, and a third digital circuit capable of determining an adjusted displacement estimate obtained from the optical displacement estimate and the mechanical displacement estimate.
- a method for error correcting an optical navigation system comprises optically coupling an image sensor to a surface of an object, capturing successive images with the image sensor of areas of the surface, storing the successive images, determining an optical displacement estimate for a relative displacement between the image sensor and the object by a comparison of the images, determining a mechanical displacement estimate for the relative displacement by consideration of a mechanical characteristic associated with the optical navigation system, and determining an adjusted displacement estimate from the optical displacement estimate and the mechanical displacement estimate.
- FIG. 1 is a drawing of a block diagram of an optical navigation system as described in various representative embodiments.
- FIG. 2A is a drawing of a navigation surface as described in various representative embodiments.
- FIG. 2B is another drawing of the navigation surface of FIG. 2A .
- FIG. 3 is a drawing of a printer with an optical navigation system as described in various representative embodiments.
- FIG. 4A is a plot of velocity versus time for the optical navigation system as described in various representative embodiments.
- FIG. 4B is a plot of distance moved versus time for the optical navigation system as described in various representative embodiments.
- FIG. 4C is another plot of distance moved versus time for the optical navigation system as described in various representative embodiments.
- FIG. 4D is still another plot of distance moved versus time for the optical navigation system as described in various representative embodiments.
- FIG. 4E is yet another plot of distance moved versus time for the optical navigation system as described in various representative embodiments.
- FIG. 5 is a plot of a mechanical displacement estimate versus an optical displacement estimate of the optical navigation system as described in various representative embodiments.
- FIG. 6A is a drawing of a more detailed block diagram of part of the optical navigation system of FIG. 1 .
- FIG. 6B is a drawing of a more detailed block diagram of a part of the optical navigation system of FIG. 6A .
- FIG. 7 is a flow chart of a method for using the optical navigation system as described in various representative embodiments.
- the present patent document discloses a novel optical navigation system.
- Previous systems capable of optical navigation have had limited accuracy in measuring distance.
- optical navigation systems are disclosed which provide for increased accuracy in the reported position of the optical navigation system.
- optical navigation sensors are used to detect the relative motion of an illuminated surface.
- an optical mouse detects the relative motion of a surface beneath the mouse and passes movement information to an associated computer.
- the movement information contains the direction and amount of movement. While the measurement of the amount of movement has been considered generally sufficient for purposes of moving a cursor, it may not be accurate enough for other applications, such as measurement of the movement of paper within a printer.
- an optical navigation system can comprise an optical sensor such as an image sensor to track, for example, paper movement in a printer.
- Information regarding the mechanical profile of the motors driving the paper feed and/or print head mechanism, as well as any attached components, is combined with the optically derived positional information to obtain increased positional accuracy of the optical sensor system.
- Advantage is taken of the fact that motion typically occurs in only one axis at a time in a printer.
- Since the paper advance mechanism has a certain mass and experiences a certain force (or rotational inertia and torque) during movement, the acceleration of the paper with respect to the navigation sensor is limited.
- the navigation sensor adds some random error to the basic motion profile. By taking the measurements of the navigation sensor from every image frame and fitting them to a motion profile formula or table consistent with the acceleration and velocity limits (or current known drive values), the true paper surface movement can be determined more accurately than with the optical navigation sensor alone.
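One simple way to exploit the acceleration and velocity limits described above is to filter the raw per-frame position readings so that no physically impossible jump survives. The sketch below assumes designer-supplied limits (`v_max`, `a_max` are hypothetical values); it is one illustration of a physically constrained fit, not the procedure of any particular product:

```python
def constrain_motion(raw_positions, dt, v_max, a_max):
    """Filter noisy per-frame position readings so that the implied
    velocity never exceeds v_max and the implied acceleration never
    exceeds a_max. Returns the constrained position sequence."""
    out = [raw_positions[0]]
    v = 0.0
    for p in raw_positions[1:]:
        v_want = (p - out[-1]) / dt
        # limit the change in velocity (acceleration), then the velocity
        v_want = max(v - a_max * dt, min(v + a_max * dt, v_want))
        v = max(-v_max, min(v_max, v_want))
        out.append(out[-1] + v * dt)
    return out
```

A spurious spike in the optical readings (here, a jump to 5.0 that the mechanics could not produce) is suppressed, while the plausible readings pass through largely unchanged.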
- the motors providing motion in a printer are typically stepper motors or servo motors. They generally move either the paper or the print head at one time, but not both at the same time.
- the nature of our navigation algorithms is such that better accuracy can be obtained if motion occurs only in one direction at any given time.
- FIG. 1 is a drawing of a block diagram of an optical navigation system 100 as described in various representative embodiments.
- the optical navigation system 100 can be attached to or a part of another device, as for example a print head or other part of a printer 380 , an optical mouse 380 or the like.
- the optical navigation system 100 includes an image sensor 110 , also referred to herein as an image sensor array 110 , an optical system 120 , which could be a lens 120 or a lens system 120 , for focusing light reflected from a work piece 130 , also referred to herein as an object 130 which could be a print media 130 which could be a piece of paper 130 which is also referred to herein as a page 130 , onto the image sensor array 110 .
- Image sensor array 110 is preferably a complementary metal-oxide semiconductor (CMOS) image sensor. However, other imaging devices such as a charge coupled-device (CCD), photo diode array or photo transistor array may also be used.
- Light from light source 140 is reflected from print media 130 and onto image sensor array 110 via optical system 120 .
- the light source 140 shown in FIG. 1 could be a light emitting diode (LED).
- other light sources 140 can also be used including, for example, a vertical-cavity surface-emitting laser (VCSEL) or other laser, an incandescent light source, a fluorescent light source, or the like.
- relative movement occurs between the work piece 130 and the optical navigation system 100 with images 150 of the surface 160 , also referred to herein as a navigation surface 160 , of the work piece 130 being periodically taken as the relative movement occurs.
- by relative movement is meant that movement of the optical navigation system 100 , in particular movement of the image sensor 110 , to the right over a stationary navigation surface 160 will result in navigational information equivalent to that which would be obtained if the object 130 were moved to the left under a stationary image sensor 110 .
- Movement direction 157 also referred to herein as a first direction 157 , in FIG. 1 indicates the direction that the optical navigation system 100 moves with respect to the stationary work piece 130 .
- the specific movement direction 157 shown in FIG. 1 is for illustrative purposes. Depending upon the application, the work piece 130 and/or the optical navigation system 100 may be capable of movement in multiple directions.
- the image sensor array 110 captures images 150 of the work piece 130 at a rate determined by the application and which may vary from time to time.
- the captured images 150 are representative of that portion of a navigation surface 160 , which could be a surface 160 of the piece of paper 130 , that is currently being traversed by the optical navigation system 100 .
- the captured image 150 is transferred to a navigation circuit 170 as image signal 155 and may be stored into a data storage device 180 , which could be a memory 180 .
- the navigation circuit 170 converts information in the image signal 155 into positional information that is delivered to the controller 190 , i.e., navigation circuit 170 generates positional signal 175 and outputs it to controller 190 . Controller 190 subsequently generates an output signal 195 that can be used to position a print head in the case of a printer application or other device as needed over the navigation surface 160 of the work piece 130 .
- the navigation circuit 170 and/or the memory 180 can be configured as an integral part of navigation circuit 170 or separate from it. Further, navigation circuit 170 can be implemented as, for example, but not limited to, a dedicated digital signal processor, an application specific integrated circuit, or a combination of logic gates.
- the optical navigation sensor must re-reference when the shift between the reference image and the current navigation image is more than a certain number of pixels, which might typically be 1/3 up to perhaps as much as 2/3 of the sensor width. Assuming a 1/8 pixel standard deviation of positional random error, the cumulative error built up in the system over a given travel will have a standard deviation of (1/8)·√N, where N is the number of re-references that occurred. In a typical optical mouse today, re-referencing occurs after a movement of 1/3 of the sensor width. Thus, for a typical image sensor array 110 having 20×20 pixels, a re-reference action is taken when a positional change of more than 6 pixels is detected. If we assume a 50 micron pixel size, the image sensor 110 will have to re-reference with every 300 microns of travel. Based on the relation above, it is apparent that the cumulative error can be reduced by reducing the number of re-references.
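Plugging the figures above (50 micron pixels, a re-reference every 300 microns, a 1/8 pixel standard deviation per re-reference) into the square-root relation gives the cumulative error for any travel; the function name is hypothetical:

```python
import math

def reref_error_std(travel_um, reref_interval_um, err_per_reref_px):
    """Cumulative position-error standard deviation (in pixels) after
    travelling travel_um microns, re-referencing every
    reref_interval_um microns, with err_per_reref_px standard
    deviation of random error added at each re-reference."""
    n = travel_um / reref_interval_um      # number of re-references
    return err_per_reref_px * math.sqrt(n)

# 30 mm of travel with a re-reference every 300 microns gives
# n = 100, so the cumulative standard deviation is 0.125 * 10 = 1.25 pixels.
```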
- FIG. 2A is a drawing of a navigation surface 160 as described in various representative embodiments. This figure also shows an outline of the image 150 obtainable by the image sensor 110 from an area of the navigation surface 160 as described in various representative embodiments.
- the navigation surface 160 has a distinct surface characteristic or pattern.
- the surface pattern is represented by the alpha characters A . . . Z and a, also referred to herein as surface patterns A . . . Z and a.
- overlaying the navigation surface 160 is the outline of the image 150 obtainable by the image sensor array 110 positioned at the far left of FIG. 2A . As such, if the image sensor 110 were positioned as shown in FIG. 2A , it would be capable of capturing that area of the surface pattern of the navigation surface 160 represented by surface patterns A . . . I.
- the image sensor 110 has nine pixels 215 , also referred to herein as photosensitive elements 215 , whose capture areas are indicated as separated by the dashed vertical and horizontal lines and separately as first pixel 215 a overlaying navigation surface pattern A, second pixel 215 b overlaying navigation surface pattern B, third pixel 215 c overlaying navigation surface pattern C, fourth pixel 215 d overlaying navigation surface pattern D, fifth pixel 215 e overlaying navigation surface pattern E, sixth pixel 215 f overlaying navigation surface pattern F, seventh pixel 215 g overlaying navigation surface pattern G, eighth pixel 215 h overlaying navigation surface pattern H, and ninth pixel 215 i overlaying navigation surface pattern I.
- the captured image 150 represented by alpha characters A . . . I is the reference image 150 which is used to obtain navigational information resulting from subsequent relative motion between the navigation surface 160 and the image sensor array 110 .
- by relative motion is meant that subsequent movement of the image sensor 110 to the right (movement direction 157 ) over a stationary navigation surface 160 will result in navigational information equivalent to that which would be obtained if the navigation surface 160 moved to the left under a stationary image sensor 110 .
- FIG. 2B is another drawing of the navigation surface 160 of FIG. 2A .
- This figure shows the outline of the image 150 obtainable by the image sensor 110 in multiple positions relative to the navigation surface 160 of FIG. 2A .
- overlaying the navigation surface 160 is the outline of the image 150 obtainable by overlaying the navigation surface 160 with the image sensor array 110 in the reference position of FIG. 2A , as well as at positions following three separate movements of the image sensor 110 to the right (or equivalently following three separate movements of the navigation surface 160 to the left).
- the reference image is indicated as initial reference image 150(0), and reference images following subsequent movements as image 150(1), image 150(2), and image 150(3).
- following the first movement, the image 150 capable of capture by the image sensor 110 is image 150(1), which comprises surface patterns G-O.
- Intermediate movements between those of images 150(0) and 150(1), with associated capture of images 150, may also be performed but for ease and clarity of illustration are not shown in FIG. 2B . Regardless, a re-referencing would be necessary, with image 150(1) now becoming the new reference image 150 ; otherwise positional reference information would be lost.
- following the second movement, the image 150 capable of capture by the image sensor 110 is image 150(2), which comprises surface patterns M-U.
- Intermediate movements between those of images 150(1) and 150(2), with associated capture of images 150, may also be performed but for ease and clarity of illustration are not shown in FIG. 2B . Regardless, a re-referencing would be necessary, with image 150(2) now becoming the new reference image 150 ; otherwise positional reference information would be lost.
- following the third movement, the image 150 capable of capture by the image sensor 110 is image 150(3), which comprises surface patterns S-Z and a.
- Intermediate movements between those of images 150(2) and 150(3), with associated capture of images 150, may also be performed but for ease and clarity of illustration are not shown in FIG. 2B . Regardless, a re-referencing would be necessary, with image 150(3) now becoming the new reference image 150 ; otherwise positional reference information would be lost.
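The re-referencing bookkeeping illustrated in FIGS. 2A-2B reduces to a simple accumulator: integrate per-frame shifts, and start a new reference image whenever the shift from the current reference exceeds a fraction of the sensor width (the one-third figure mentioned earlier). The sketch below is one-dimensional and the names are hypothetical:

```python
def track(displacements, sensor_width_px, reref_fraction=1/3):
    """Accumulate per-frame shifts (in pixels) and re-reference whenever
    the shift from the current reference image exceeds reref_fraction of
    the sensor width. Returns (total position, number of re-references)."""
    threshold = sensor_width_px * reref_fraction
    position = 0.0
    since_reref = 0.0
    rerefs = 0
    for d in displacements:
        position += d
        since_reref += d
        if abs(since_reref) > threshold:
            rerefs += 1          # adopt the current image as reference
            since_reref = 0.0
    return position, rerefs
```

For a 20-pixel-wide sensor (threshold ≈ 6.7 pixels) and ten frames of 2-pixel motion, the tracker covers 20 pixels while re-referencing twice, which is the quantity N in the E·√N error relation.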
- FIG. 3 is a drawing of a printer 380 with the optical navigation system 100 as described in various representative embodiments.
- a piece of paper 130 is shown placed on a platen 310 .
- Rollers 320 hold the page 130 against the platen 310 .
- Appropriate rotation of the rollers 320 moves the page 130 back and forth parallel to the Y-axis.
- a print head 330 is driven back and forth parallel to the X-axis along a rod 340 mounted between supports 350 .
- Appropriate movement of the print head 330 places the print head 330 at a selected location over the page 130 for dispersing ink from the print head 330 to the page 130 .
- other devices 330 besides the print head 330 can be attached to the optical navigation system 100 .
- FIG. 4A is a plot of velocity versus time for the optical navigation system 100 as described in various representative embodiments.
- v(t) = ∫ a(t) dt.
- FIG. 4A assumes for illustrative purposes that the acceleration of the print head 330 is constant from initiation at time T 0 until time T 1 . During the time period T 0 to T 1 , the velocity in FIG. 4A is indicated by the solid line 401 .
- a maximum velocity V M is reached and the acceleration force on the print head 330 is removed with any remaining force only matching the resistive forces in the system.
- the velocity is indicated by the solid line 402 .
- the print head 330 then travels at the constant velocity V M until time T 2 when the print head 330 begins to decelerate.
- Deceleration occurs at a constant rate which, for purposes of illustration only, is assumed to equal the rate of acceleration between times T 0 and T 1 .
- the velocity is indicated by the solid line 403 .
- Time T 0 represents the time that the print head 330 and optical navigation system 100 begin to move from a rest position. Typically for a printer 380 this movement at time T 0 is either in the X-direction or in the Y-direction, but not both at the same time. However, this is a choice of the application designer and not a constraint on the optical navigation system 100 . Also, the choices of a constant acceleration and constant deceleration are made herein for ease of illustration. Other acceleration and velocity profiles are possible in any given application and may be dictated by the physical constraints of the application, but they do not limit the concepts of the representative embodiments herein.
- time T F determines the stopping location of the print head 330 and the optical navigation system 100 .
- V M is the maximum velocity of the print head 330 and the optical navigation system 100 .
- movement at the maximum velocity V M may occur at a time greater or less than T 2 , in which case deceleration will occur along dashed lines 405 with final stopping times less than or greater than T F shown in FIG. 4A and corresponding stopping positions.
- FIG. 4B is a plot of distance moved versus time for the optical navigation system 100 as described in various representative embodiments.
- distance moved vs. time is plotted, as an example, for the print head 330 and optical navigation system 100 as in FIG. 3 for the acceleration phase during the times T 0 to T 1 .
- the print head 330 and optical navigation system 100 move from location X 0 at time T 0 to location X 1 at time T 1 .
- FIG. 4C is another plot of distance moved versus time for the optical navigation system 100 as described in various representative embodiments.
- distance moved vs. time is plotted, again as an example, for the print head 330 and optical navigation system 100 as in FIG. 3 for the constant velocity phase during times T 1 to T 2 .
- the print head 330 and optical navigation system 100 move from location X 1 at time T 1 to location X 2 at time T 2 .
- FIG. 4D is still another plot of distance moved versus time for the optical navigation system 100 as described in various representative embodiments.
- distance moved vs. time is plotted, yet again as an example, for the print head 330 and optical navigation system 100 as in FIG. 3 for the deceleration phase during times T 2 to T F .
- the print head 330 and optical navigation system 100 move from location X 2 at time T 2 to location X F at time T F .
- FIG. 4E is yet another plot of distance moved versus time for the optical navigation system 100 as described in various representative embodiments.
- FIG. 4E is a composite plot of FIGS. 4B-4D . Note that at any position on FIGS. 4B-4E , the slope of the plot is equal to the velocity of the print head 330 and optical navigation system 100 at that time. Also, note that the slope of the respective plots is zero at times T 0 and T F corresponding to a zero velocity and that the slopes of the respective plots do not exhibit discontinuities at either T 1 between FIGS. 4B and 4C or at T 2 between FIGS. 4C and 4D .
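Assuming, as in FIGS. 4A-4E, constant acceleration from T0 to T1, constant velocity V_M from T1 to T2, and deceleration at the same constant rate from T2 to TF, the distance-versus-time curve is a simple piecewise function. This is a sketch with hypothetical parameter names, valid when TF - T2 equals T1 - T0 so the velocity reaches zero exactly at TF:

```python
def distance_at(t, t0, t1, t2, tf, v_max):
    """Distance along the trapezoidal velocity profile of FIGS. 4A-4E:
    constant acceleration from t0 to t1, constant velocity v_max from
    t1 to t2, constant deceleration (same magnitude) from t2 to tf."""
    a = v_max / (t1 - t0)                  # acceleration magnitude
    if t <= t0:
        return 0.0
    if t <= t1:                            # accelerating (FIG. 4B)
        return 0.5 * a * (t - t0) ** 2
    d1 = 0.5 * a * (t1 - t0) ** 2          # distance X1 - X0
    if t <= t2:                            # cruising (FIG. 4C)
        return d1 + v_max * (t - t1)
    d2 = d1 + v_max * (t2 - t1)            # distance X2 - X0
    if t <= tf:                            # decelerating (FIG. 4D)
        dt = t - t2
        return d2 + v_max * dt - 0.5 * a * dt ** 2
    dt = tf - t2
    return d2 + v_max * dt - 0.5 * a * dt ** 2
```

As the composite plot notes, the curve is continuous with zero slope at the endpoints: with t0=0, t1=1, t2=3, tf=4, v_max=2, the distances at T1, T2, and TF are 1, 5, and 6.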
- FIG. 5 is a plot of a mechanical displacement estimate 510 versus an optical displacement estimate 520 of the optical navigation system 100 as described in various representative embodiments.
- the ideal case with zero error in both the mechanical displacement estimate 510 and the optical displacement estimate 520 is shown by the solid, straight line rising at 45 degrees to the right in FIG. 5 .
- Dashed lines parallel to the solid 45 degree line represent an error range 530 defined by the designer or user as being acceptable.
- the mechanical displacement estimate 510 is X MECH and the optical displacement estimate 520 is X OPT , which intersect at point E. Note that in the representative example of FIG. 5 , point E lies outside the acceptable error range 530 .
- the displacement estimate which the system reports could be adjusted by taking the projection of the intersection of X MECH and the upper error range 45 degree line on the axis of optical displacement estimate 520 as an adjusted displacement estimate 540 indicated in FIG. 5 as X CORR .
- X MECH could be used as a maximum for X OPT .
- a minimum value for X MECH could also be determined for setting a minimum value for X OPT .
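The adjustment described in the preceding bullets amounts to clamping the optical estimate to an acceptance band around the mechanical estimate: X_MECH plus the tolerance serves as a maximum for X_OPT, and X_MECH minus the tolerance as a minimum, yielding the X_CORR of FIG. 5. A minimal sketch with hypothetical names:

```python
def adjust_estimate(x_opt, x_mech, tolerance):
    """Adjusted displacement estimate (X_CORR of FIG. 5): accept the
    optical estimate when it lies within +/- tolerance of the
    mechanical estimate, otherwise clamp it to the nearest bound."""
    return min(max(x_opt, x_mech - tolerance), x_mech + tolerance)
```

For example, with a mechanical estimate of 7.0 and a tolerance of 1.5, an optical estimate of 7.2 is accepted unchanged, while an optical estimate of 10.0 (outside the error range, like point E in FIG. 5) is pulled back to 8.5.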
- more complex velocity/acceleration profiles could also be used and may, in fact, be dictated by the application.
- the data may be theoretically or empirically determined.
- the motion profile may be accessed via equations, via tables of numbers representing the profile, or via other acceptable means.
- FIG. 6A is a drawing of a more detailed block diagram of part of the optical navigation system of FIG. 1 .
- the navigation circuit 170 comprises an optical displacement estimation digital circuit 371 , also referred to herein as a first digital circuit 371 , for determining the optical displacement estimate 520 between the image sensor 110 and the object 130 obtained by comparing the image 150 captured at a given time to the image 150 captured at a subsequent time, a mechanical displacement estimation digital circuit 372 , also referred to herein as a second digital circuit 372 , for determining the mechanical displacement estimate 510 obtained from consideration of the mechanical characteristics of the optical navigation system 100 , and a displacement estimation adjustment digital circuit 373 , also referred to herein as a third digital circuit 373 , for determining an adjusted displacement estimate 540 based on the optical displacement estimate 520 and the mechanical displacement estimate 510 .
- a displacement may or may not occur between the times corresponding to the capture of the two images 150 .
- FIG. 6B is a drawing of a more detailed block diagram of a part of the optical navigation system of FIG. 6A .
- the optical displacement estimation digital circuit 371 comprises an image shift digital circuit 374 , also referred to herein as a fourth digital circuit 374 , for performing multiple shifts in one of the images 150 , a shift comparison digital circuit 375 , also referred to herein as a fifth digital circuit 375 , for performing a comparison, which could be a cross-correlation comparison, between one of the other images 150 and the shifted multiple images 150 , a displacement estimation computation digital circuit 376 , also referred to herein as a sixth digital circuit 376 , for using shift information for the shifted image 150 , which information could be used in comparisons employing cross-correlations to the one other image 150 , to compute the estimate of the relative displacement between the image sensor 110 and the object 130 , and an image specification digital circuit 377 , also referred to herein as a seventh digital circuit 377 , for specifying the images 150 to be used in the comparison.
- Some integrated circuits, such as the Agilent ADNS-2030 used in optical mice, use a technique called “prediction” that reduces the amount of computation needed for cross-correlation.
- an optical mouse could work by doing every possible cross-correlation of images (i.e., shift of 1 pixel in all directions, shift of 2 pixels in all directions, etc.) for any given pair of images.
- the problem with this is that as the number of shifts considered increases, the needed computations increase even faster. For example, for a 9×9-pixel optical mouse there are only 9 possible positions considering a maximum shift of 1 pixel (8 shifted by 1 pixel and one for no movement), but there are 25 possible positions for a maximum considered shift of 2 pixels, and so forth.
- Prediction decreases the amount of computation by pre-shifting one of the images based on an estimated mouse velocity to attempt to overlap the images exactly.
- the maximum amount of shift between the two images is smaller because the shift is related to the error in the prediction process rather than the absolute velocity of the mouse. Consequently, less computation is required.
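The prediction step can be sketched as below. This is a hypothetical illustration, not the ADNS-2030's actual algorithm: the function names `best_shift` and `predicted_shift`, the raw product score, and the single-bright-pixel test images are all invented for clarity. The payoff is the search size: with a prediction good to within one pixel, only the 9 correlations of a ±1-pixel residual search are needed instead of, say, the 25 of a ±2-pixel search.

```python
import numpy as np

def best_shift(ref, cur, max_shift):
    """Exhaustively score integer shifts of `cur` against `ref` and
    return the (dy, dx) shift with the largest correlation score."""
    h, w = ref.shape
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Overlapping windows under the hypothesis cur = ref shifted by (dy, dx).
            r = ref[max(0, -dy):h + min(0, -dy), max(0, -dx):w + min(0, -dx)]
            c = cur[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
            score = float(np.sum(r * c))   # unnormalized correlation, for brevity
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

def predicted_shift(ref, cur, predicted, search=1):
    """'Prediction': pre-shift the reference by the predicted motion so the
    images nearly overlap, then search only a small residual neighborhood."""
    py, px = predicted
    aligned = np.roll(ref, (py, px), axis=(0, 1))   # pre-shift by the prediction
    dy, dx = best_shift(aligned, cur, search)       # residual is small if prediction is good
    return (py + dy, px + dx)
```

A ±1 residual search evaluates 9 candidate shifts regardless of how fast the mouse is actually moving, because the residual reflects only the prediction error.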
- FIG. 7 is a flow chart of a method 600 for using the optical navigation system 100 as described in various representative embodiments.
- In block 610, an image 150 of an area of the surface 160 of the work piece 130 is captured by the optical navigation system 100. This image 150 is captured without prior knowledge of whether the optical navigation system 100 with the print head 330 is stationary or has been moved to a new location. Block 610 then transfers control to block 620.
- In block 620, an expected new position, the optical displacement estimate X OPT of FIG. 5, is obtained by comparing successive images 150 of areas of the page 130 captured by the image sensor 110 as described above. Block 620 then transfers control to block 630.
- In block 630, an expected new position based on mechanical profiles of the system, the mechanical displacement estimate X MECH of FIG. 5, is obtained by, for example, the techniques described above with respect to FIGS. 4A-4E.
- Block 630 then transfers control to block 640 .
- In block 640, if the theoretical mechanical location X MECH lies outside the acceptable error range 530, block 640 transfers control to block 650. Otherwise, block 640 transfers control to block 660.
- In block 650, the optical displacement estimate X OPT is corrected using knowledge of X MECH to yield the adjusted displacement estimate X CORR.
- Block 650 then transfers control to block 660 .
- In block 660, the new position based on an initial starting location of the optical navigation system 100 is reported to the controller 190.
- the actual value reported depends upon the path taken to block 660 .
- the reported value is that of the unadjusted optical displacement estimate X OPT if control is passed from block 640 to block 660. Otherwise, the reported value is that of the adjusted displacement estimate X CORR.
- Block 660 then transfers control back to block 610 .
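The loop through blocks 610-660 can be sketched as follows. All names here are invented for illustration, and the correction in block 650 is simplified: the patent derives X CORR from both estimates, whereas this sketch simply falls back to the mechanical estimate when the two disagree by more than the acceptable error range 530.

```python
def navigation_step(x_prev, image_prev, image_now, x_mech, err_range,
                    optical_estimate):
    """One pass through blocks 610-660. `optical_estimate` is a
    caller-supplied comparison of successive images (block 620);
    `x_mech` is the mechanical displacement estimate (block 630)."""
    # Block 620: optical displacement estimate X_OPT from the two images.
    x_opt = optical_estimate(image_prev, image_now)
    # Block 640: does X_MECH fall outside the acceptable error range
    # around X_OPT?
    if abs(x_opt - x_mech) > err_range:
        # Block 650 (simplified): correct X_OPT using knowledge of X_MECH.
        x_corr = x_mech
        return x_prev + x_corr     # block 660: report adjusted position
    return x_prev + x_opt          # block 660: report unadjusted position
```

In a real system the loop would run once per captured frame, feeding the reported position back to the controller 190.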
- Motion computations are performed for an optical navigation system in a way that is altered by knowledge of the movement profile of the device for which motion is being measured. By noting the direction of intended motion and the plausible acceleration and velocity profiles that the object is expected to undergo, it is possible to increase the accuracy of positional determination obtained from the cross-correlation based image motion measurements normally used by an optical mouse.
Description
- The subject matter of the instant application is related to that of U.S. Pat. No. 6,433,780 by Gordon et al., entitled “Seeing Eye Mouse for a Computer System,” issued 13 Aug. 2002 and assigned to Agilent Technologies, Inc. This patent describes a basic technique for reducing the amount of computation needed for cross-correlation, which technique is included among the components of the representative embodiments described below. Accordingly, U.S. Pat. No. 6,433,780 is hereby incorporated herein by reference.
- One of the most common and, at the same time, useful input devices for user control of modern computer systems is the mouse. The main goal of a mouse as an input device is to translate the motion of an operator's hand into signals that the computer can use. This goal is accomplished by displaying on the screen of the computer's monitor a cursor which moves in response to the user's hand movement. Commands which can be selected by the user are typically keyed to the position of the cursor. The desired command can be selected by first placing the cursor, via movement of the mouse, at the appropriate location on the screen and then activating a button or switch on the mouse.
- Positional control of cursor placement on the monitor screen was initially obtained by mechanically detecting the relative movement of the mouse with respect to a fixed frame of reference, i.e., the top surface of a desk or a mouse pad. A common technique is to use a ball inside the mouse which in operation touches the desktop and rolls when the mouse moves. Inside the mouse there are two rollers which touch the ball and roll as the ball rolls. One of the rollers is oriented so that it detects motion in a nominal X direction, and the other is oriented 90 degrees to the first roller so it detects motion in the associated Y direction. The rollers are connected to separate shafts, and each shaft is connected to a separate optical encoder which outputs an electrical signal corresponding to movement of its associated roller. This signal is appropriately encoded and sent typically as binary data to the computer which in turn decodes the signal it received and moves the cursor on the computer screen by an amount corresponding to the physical movement of the mouse.
- More recently, optical navigation techniques have been used to produce the motion signals that are indicative of relative movement along the directions of coordinate axes. These techniques have been used, for instance, in optical computer mice and fingertip tracking devices to replace conventional mice and trackballs, again for the position control of screen pointers in windowed user interfaces for computer systems. Such techniques have several advantages, among which are the lack of moving parts that accumulate dirt and that suffer from mechanical wear when used.
- Distance measurement of movement of paper within a printer can be performed in different ways, depending on the situation. For printer applications, we can measure the distance moved by counting the number of steps taken by a stepper motor, because each step of the motor will move a certain known distance. Another alternative is to use an encoding wheel designed to measure relative motion of the surface whose motion causes the wheel to rotate. It is also possible to place marks on the paper that can be detected by sensors.
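The stepper-motor approach amounts to a unit conversion: steps times the known distance per step. A minimal sketch, with parameter values that are purely illustrative (the patent does not specify a drive geometry):

```python
def stepper_distance_mm(steps, steps_per_rev=200, feed_per_rev_mm=10.0):
    """Distance the paper has moved when driven by a stepper motor:
    each step advances the paper a fixed, known amount.
    Default drive values are hypothetical, for illustration only."""
    return steps * feed_per_rev_mm / steps_per_rev
```

With these assumed values, 400 steps of a 200-step-per-revolution motor feeding 10 mm per revolution correspond to 20 mm of paper travel.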
- Motion in a system using optical navigation techniques is measured by tracking the relative displacement of a series of images. First, a two-dimensional view of an area of the reference surface is focused upon an array of photodetectors, whose outputs are digitized and stored as a reference image in a corresponding array of memory. A brief time later a second image is digitized. If there has been no motion, then the image obtained subsequent to the reference image and the reference image are essentially identical. If, on the other hand, there has been some motion, then the subsequent image will have been shifted along the axis of motion, with the magnitude of the image shift corresponding to the magnitude of physical movement of the array of photodetectors. The so-called optical mouse, used in place of the mechanical mouse for positional control in computer systems, employs this technique.
- In practice, the direction and magnitude of movement of the optical mouse can be measured by comparing the reference image to a series of shifted versions of the second image. The shifted image corresponding best to the actual motion of the optical mouse is determined by performing a cross-correlation between the reference image and each of the shifted second images with the correct shift providing the largest correlation value. Subsequent images can be used to indicate subsequent movement of the optical mouse using the method just described.
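The comparison against a series of shifted versions can be illustrated in one dimension with toy data; `correlate_shifts` is a hypothetical name, not from the patent. The shift whose overlap produces the largest correlation value is taken as the motion.

```python
import numpy as np

def correlate_shifts(reference, second, max_shift):
    """Score each candidate shift of `second` against `reference` by
    cross-correlating the overlapping samples; the shift with the
    largest correlation value wins."""
    n = len(reference)
    scores = {}
    for s in range(-max_shift, max_shift + 1):
        # Overlap under the hypothesis: second = reference shifted right by s.
        r = reference[max(0, -s):n + min(0, -s)]
        c = second[max(0, s):n + min(0, s)]
        scores[s] = float(np.dot(r, c))
    return max(scores, key=scores.get), scores
```

A real sensor works on two-dimensional frames and normalizes the correlation, but the selection rule, largest correlation wins, is the same.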
- At some point in the movement of the optical mouse, however, the image obtained which is to be compared with the reference image may no longer overlap the reference image to a degree sufficient to be able to accurately identify the motion that the mouse incurred. Before this situation can occur it is necessary for one of the subsequent images to be defined as a new reference image. This redefinition of the reference image is referred to as re-referencing.
- Measurement inaccuracy in optical navigation systems is a result of the manner in which such systems obtain their movement information. Optical navigation sensors operate by obtaining a series of images of an underlying surface. This surface has a micro texture. When this micro texture is illuminated (typically at an angle) by a light source, it produces a pattern of shadows that is detected by the photosensor array. A sequence of images of these shadow patterns is obtained, and the optical navigation sensor attempts to calculate the relative motion of the surface that would account for changes in the image. Thus, if an image obtained at time t(n+1) is shifted left by one pixel relative to the image obtained at time t(n), then the optical navigation sensor most likely has moved right by one pixel relative to the observed surface.
- As long as the reference frame and current frame overlap by a sufficient amount, movement can be calculated with sub-pixel accuracy. However, a problem occurs when an insufficient overlap occurs between the reference frame and the current frame, as movement cannot be determined accurately in this case. To prevent this problem, a new reference frame is selected whenever overlap between the reference frame and the current frame is less than some threshold. However, because of noise in the optical sensor array, the sensor will have some amount of error introduced into the measurement of the amount of movement each time the reference frame is changed. Thus, as the size of the measured movement increases, the amount of error will increase as more and more new reference frames are selected.
- Due to the lack of an absolute positional reference, at each re-referencing, any positional errors from the previous re-referencing procedure are accumulated. When the optical mouse sensor travels over a long distance, the total cumulative position error built up can be significant. If the photosensor array is 30×30, re-referencing may need to occur each time the mouse moves 15 pixels or so (15 pixels at 60 microns per pixel = one reference frame update every 0.9 mm). The amount of measurement error over a given distance is proportional to E·√N, where E is the error per reference-frame change and N is the number of reference-frame updates.
- An optical navigation system. The optical navigation system comprises an image sensor capable of optical coupling to a surface of an object, a data storage device, and a navigation circuit. The data storage device is capable of storing successive images captured by the image sensor. The navigation circuit comprises a first digital circuit capable of determining an optical displacement estimate for a relative displacement between the image sensor and the object obtained by comparing the images, a second digital circuit capable of determining a mechanical displacement estimate for the relative displacement obtained from consideration of the mechanical characteristics of the optical navigation system, and a third digital circuit capable of determining an adjusted displacement estimate obtained from the optical displacement estimate and the mechanical displacement estimate.
- In another representative embodiment, a method for error correcting an optical navigation system is disclosed. The method steps comprise optically coupling an image sensor to a surface of an object, capturing successive images with the image sensor of areas of the surface, storing the successive images, determining an optical displacement estimate for a relative displacement between the image sensor and the object by a comparison of the images, determining a mechanical displacement estimate for the relative displacement by consideration of a mechanical characteristic associated with the optical navigation system, and determining an adjusted displacement estimate from the optical displacement estimate and the mechanical displacement estimate.
- Other aspects and advantages of the representative embodiments presented herein will become apparent from the following detailed description, taken in conjunction with the accompanying drawings.
- The accompanying drawings provide visual representations which will be used to more fully describe various representative embodiments and can be used by those skilled in the art to better understand them and their inherent advantages. In these drawings, like reference numerals identify corresponding elements.
-
FIG. 1 is a drawing of a block diagram of an optical navigation system as described in various representative embodiments. -
FIG. 2A is a drawing of a navigation surface as described in various representative embodiments. -
FIG. 2B is another drawing of the navigation surface ofFIG. 2A . -
FIG. 3 is a drawing of a printer with an optical navigation system as described in various representative embodiments. -
FIG. 4A is a plot of velocity versus time for the optical navigation system as described in various representative embodiments. -
FIG. 4B is a plot of distance moved versus time for the optical navigation system as described in various representative embodiments. -
FIG. 4C is another plot of distance moved versus time for the optical navigation system as described in various representative embodiments. -
FIG. 4D is still another plot of distance moved versus time for the optical navigation system as described in various representative embodiments. -
FIG. 4E is yet another plot of distance moved versus time for the optical navigation system as described in various representative embodiments. -
FIG. 5 is a plot of a mechanical displacement estimate versus an optical displacement estimate of the optical navigation system as described in various representative embodiments. -
FIG. 6A is a drawing of a more detailed block diagram of part of the optical navigation system ofFIG. 1 . -
FIG. 6B is a drawing of a more detailed block diagram of a part of the optical navigation system ofFIG. 6A . -
FIG. 7 is a flow chart of a method for using the optical navigation system as described in various representative embodiments. - As shown in the drawings for purposes of illustration, the present patent document discloses a novel optical navigation system. Previous systems capable of optical navigation have had limited accuracy in measuring distance. In representative embodiments, optical navigation systems are disclosed which provide increased accuracy in the reported position of the optical navigation system.
- In the following detailed description and in the several figures of the drawings, like elements are identified with like reference numerals.
- As previously indicated, optical navigation sensors are used to detect the relative motion of an illuminated surface. In particular, an optical mouse detects the relative motion of a surface beneath the mouse and passes movement information to an associated computer. The movement information contains the direction and amount of movement. While the measurement of the amount of movement has been considered generally sufficient for purposes of moving a cursor, it may not be accurate enough for other applications, such as measurement of the movement of paper within a printer.
- Due to the lack of absolute positional reference, at each re-referencing, any positional errors from the previous re-referencing procedure are permanently built into the system. As the mouse sensor travels over a long distance, the total cumulative position error built up can be significant, especially in printer and other applications.
- In representative embodiments disclosed herein, information from an optical navigation system and information regarding the mechanical acceleration/velocity profile of the system are combined to provide a more accurate estimate of the true position of the optical navigation system. The optical navigation system can comprise an optical sensor such as an image sensor to track, for example, paper movement in a printer. Information regarding the mechanical profile of the motors driving the paper feed and/or print head mechanism, as well as any attached components, are combined with the optically derived positional information to obtain increased positional accuracy of the optical sensor system. Advantage is taken of the fact that motion typically occurs in only one axis at a time in a printer. Since the paper advance mechanism has a certain mass and experiences a certain force (or rotational inertia and torque) during movement, the acceleration of the paper with respect to the navigation sensor is limited. The navigation sensor adds some random error to the basic motion profile. By taking the measurements of the navigation sensor from every image frame and fitting them to a motion profile formula or table consistent with the acceleration and velocity limits (or current known drive values), the true paper surface movement can be determined more accurately than with the optical navigation sensor alone.
- High frequency random position errors can be smoothed out since they aren't physically plausible given the inertia of the system. However, absolute position accuracy over a longer time period can still be obtained from the navigation sensor. Positional jitter would be greatly reduced by this technique when the paper is not moving by essentially averaging out the random sensor errors.
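One simple way to realize this smoothing is shown below. It is a sketch under an assumption: instead of the full profile fit the text describes, it merely clamps frame-to-frame velocity changes to the mechanically plausible acceleration limit, which is enough to reject high-frequency jitter that the system's inertia could not produce. The function name and parameters are invented for illustration.

```python
def profile_constrained_positions(measured, v0=0.0, dt=1.0, a_max=0.1):
    """Filter noisy per-frame optical position measurements by rejecting
    velocity changes that exceed the plausible acceleration limit a_max.
    `measured` holds raw positions, one per image frame."""
    positions, pos, v = [], 0.0, v0
    for i in range(1, len(measured)):
        v_meas = (measured[i] - measured[i - 1]) / dt
        # Velocity cannot change by more than a_max*dt between frames.
        v = min(max(v_meas, v - a_max * dt), v + a_max * dt)
        pos += v * dt
        positions.append(pos)
    return positions
```

Fed a stationary paper's jittery readings, the clamp keeps the reported position nearly constant, which is the jitter-reduction effect described above; a real implementation would fit to the known drive profile rather than just clamp.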
- In addition, it is possible to alter the re-referencing strategy based on which direction the paper is currently moving. When the paper is moving along the X-axis, new references could be acquired for image shifts of, for example, ½ the array size in X, without considering Y-axis shifts in determining when to take new references. The fractional overlap threshold for re-referencing could also be set differently for the X-axis and the Y-axis based on the speed and accuracy expectations of the two axes.
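An axis-aware re-referencing rule might look like the following sketch; the ½-array threshold comes from the example above, while the function name and interface are assumptions.

```python
def should_rereference(shift_x, shift_y, array_size, moving_axis):
    """Decide whether to acquire a new reference image, considering only
    the axis currently in motion and a 1/2-array-size shift threshold
    (the illustrative threshold from the text)."""
    if moving_axis == "x":
        return abs(shift_x) >= array_size / 2
    return abs(shift_y) >= array_size / 2
```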
- It is also possible to determine the sensor position at times between images. For example, suppose images are taken 10,000 times per second and the paper is moving at about 1 meter per second. Images are then taken approximately every 100 microns of movement. If the print head needs firing pulses every 25 microns of movement, these times can be calculated by the best fit motion profile from the combined navigation sensor and motor drive information, even though no pictures are taken at that time.
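The between-image interpolation can be sketched with the numbers from this example: frames every 100 µs (10,000 per second) at about 1 m/s give one image per ~100 µm, and firing positions every 25 µm are found from the fitted motion between samples. The linear fit and the function name are simplifying assumptions.

```python
def firing_times_us(x0_um, x1_um, t0_us, t1_us, pitch_um=25.0):
    """Interpolate the motion between two image samples (assumed locally
    linear here) to find when the head crosses each 25-micron firing
    position after x0_um."""
    v = (x1_um - x0_um) / (t1_us - t0_us)        # microns per microsecond
    times = []
    x = (x0_um // pitch_um + 1) * pitch_um        # next firing position
    while x <= x1_um:
        times.append(t0_us + (x - x0_um) / v)
        x += pitch_um
    return times
```

For one 100-µs, 100-µm interval this yields firing pulses at 25, 50, 75, and 100 µs, even though no image is captured at the intermediate times.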
- The motors providing motion in a printer are typically stepper motors or servo motors. They generally move either the paper or the print head at one time, but not both at the same time. The nature of our navigation algorithms is such that better accuracy can be obtained if motion occurs in only one direction at any given time.
-
FIG. 1 is a drawing of a block diagram of an optical navigation system 100 as described in various representative embodiments. The optical navigation system 100 can be attached to or be a part of another device, as for example a print head or other part of a printer 380, an optical mouse 380, or the like. In FIG. 1, the optical navigation system 100 includes an image sensor 110, also referred to herein as an image sensor array 110, and an optical system 120, which could be a lens 120 or a lens system 120, for focusing light reflected from a work piece 130, also referred to herein as an object 130, which could be a print media 130, which could be a piece of paper 130, also referred to herein as a page 130, onto the image sensor array 110. Illumination of the print media 130 is provided by light source 140. Image sensor array 110 is preferably a complementary metal-oxide-semiconductor (CMOS) image sensor. However, other imaging devices such as a charge-coupled device (CCD), photodiode array, or phototransistor array may also be used. Light from light source 140 is reflected from print media 130 onto image sensor array 110 via optical system 120. The light source 140 shown in FIG. 1 could be a light-emitting diode (LED). However, other light sources 140 can also be used including, for example, a vertical-cavity surface-emitting laser (VCSEL) or other laser, an incandescent light source, a fluorescent light source, or the like. Additionally, it is possible for ambient light sources 140 external to the optical navigation system 100 to be used provided the resulting light level is sufficient to meet the sensitivity threshold requirements of the image sensor array 110.
- In operation, relative movement occurs between the work piece 130 and the optical navigation system 100, with images 150 of the surface 160, also referred to herein as a navigation surface 160, of the work piece 130 being periodically taken as the relative movement occurs. By relative movement is meant that movement of the optical navigation system 100, in particular movement of the image sensor 110, to the right over a stationary navigation surface 160 will result in navigational information equivalent to that which would be obtained if the object 130 were moved to the left under a stationary image sensor 110. Movement direction 157, also referred to herein as a first direction 157, in FIG. 1 indicates the direction that the optical navigation system 100 moves with respect to the stationary work piece 130. The specific movement direction 157 shown in FIG. 1 is for illustrative purposes. Depending upon the application, the work piece 130 and/or the optical navigation system 100 may be capable of movement in multiple directions.
- The image sensor array 110 captures images 150 of the work piece 130 at a rate determined by the application, which may vary from time to time. The captured images 150 are representative of that portion of a navigation surface 160, which could be a surface 160 of the piece of paper 130, that is currently being traversed by the optical navigation system 100. The captured image 150 is transferred to a navigation circuit 170 as image signal 155 and may be stored in a data storage device 180, which could be a memory 180.
- The navigation circuit 170 converts information in the image signal 155 into positional information that is delivered to the controller 190, i.e., navigation circuit 170 generates positional signal 175 and outputs it to controller 190. Controller 190 subsequently generates an output signal 195 that can be used to position a print head, in the case of a printer application, or other device as needed over the navigation surface 160 of the work piece 130. The memory 180 can be configured as an integral part of navigation circuit 170 or separate from it. Further, navigation circuit 170 can be implemented as, for example, but not limited to, a dedicated digital signal processor, an application-specific integrated circuit, or a combination of logic gates.
- The optical navigation sensor must re-reference when the shift between the reference image and the current navigation image is more than a certain number of pixels, which might typically be ⅓ up to perhaps as much as ⅔ of the sensor width. Assuming a ⅛-pixel standard deviation of positional random error, the cumulative error built up in the system over a given travel will have a standard deviation of ⅛·√N, where N is the number of re-references that occurred. In a typical optical mouse today, re-referencing occurs after a movement of ⅓ of the sensor width. Thus, for a typical image sensor array 110 having 20×20 pixels, a re-reference action is taken when a positional change of more than 6 pixels is detected. If we assume a 50-micron pixel size, the image sensor 110 will have to re-reference with every 300 microns of travel. Based on the relation above, it is apparent that the cumulative error can be reduced by reducing the number of re-references.
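Plugging in the numbers just given (20×20 array, 50-µm pixels, re-reference every 6 pixels, ⅛-pixel error per re-reference), the cumulative error over any travel distance follows directly; the function name is invented for illustration.

```python
import math

def travel_error_std_px(travel_um, pixel_um=50.0, reref_px=6, err_px=0.125):
    """Cumulative position-error standard deviation (in pixels) after a
    given travel: one re-reference per reref_px pixels of travel, with
    independent errors of err_px per re-reference adding as err_px*sqrt(N).
    Defaults are the example values from the text."""
    n = travel_um / (reref_px * pixel_um)   # number of re-references
    return err_px * math.sqrt(n)
```

For example, 30 mm of travel implies 100 re-references (one per 300 µm) and a cumulative standard deviation of 1.25 pixels, which is why reducing the number of re-references reduces the error.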
FIG. 2A is a drawing of a navigation surface 160 as described in various representative embodiments. This figure also shows an outline of the image 150 obtainable by the image sensor 110 from an area of the navigation surface 160. In FIG. 2A, the navigation surface 160 has a distinct surface characteristic or pattern. In this example, for purposes of illustration, the surface pattern is represented by the alpha characters A . . . Z and a, also referred to herein as surface patterns A . . . Z and a. As just stated, overlaying the navigation surface 160 is the outline of the image 150 obtainable by overlaying the navigation surface 160 with the image sensor array 110 to the far left of FIG. 2A. As such, if the image sensor 110 were positioned as shown in FIG. 2A over the navigation surface 160, the image sensor 110 would be capable of capturing that area of the surface pattern of the navigation surface 160 represented by surface patterns A . . . I. For the representative embodiment of FIG. 2A, the image sensor 110 has nine pixels 215, also referred to herein as photosensitive elements 215, whose capture areas are indicated as separated by the dashed vertical and horizontal lines and separately as first pixel 215a overlaying navigation surface pattern A, second pixel 215b overlaying navigation surface pattern B, third pixel 215c overlaying navigation surface pattern C, fourth pixel 215d overlaying navigation surface pattern D, fifth pixel 215e overlaying navigation surface pattern E, sixth pixel 215f overlaying navigation surface pattern F, seventh pixel 215g overlaying navigation surface pattern G, eighth pixel 215h overlaying navigation surface pattern H, and ninth pixel 215i overlaying navigation surface pattern I. For navigational purposes, the captured image 150 represented by alpha characters A . . . I is the reference image 150 which is used to obtain navigational information resulting from subsequent relative motion between the navigation surface 160 and the image sensor array 110. By relative motion is meant that subsequent movement of the image sensor 110 to the right (movement direction 157) over a stationary navigation surface 160 will result in navigational information equivalent to that which would be obtained if the navigation surface 160 moved to the left under a stationary image sensor 110.
FIG. 2B is another drawing of the navigation surface 160 of FIG. 2A. This figure shows the outline of the image 150 obtainable by the image sensor 110 in multiple positions relative to the navigation surface 160 of FIG. 2A. Also shown in FIG. 2B overlaying the navigation surface 160 is the outline of the image 150 obtainable by overlaying the navigation surface 160 with the image sensor array 110 in the reference position of FIG. 2A, as well as at positions following three separate movements of the image sensor 110 to the right (or equivalently following three separate movements of the navigation surface 160 to the left). In FIG. 2B, the reference image is indicated as initial reference image 150(0), and the images following subsequent movements as image 150(1), image 150(2), and image 150(3).
- Following the first movement, the image 150 capable of capture by the image sensor 110 is image 150(1), which comprises surface patterns G-O. Intermediate movements between those of images 150(0) and 150(1), with associated capture of images 150, may also be performed but for ease and clarity of illustration are not shown in FIG. 2B. Regardless, a re-referencing would be necessary, with image 150(1) now becoming the new reference image 150; otherwise positional reference information would be lost.
- Following the second movement, the image 150 capable of capture by the image sensor 110 is image 150(2), which comprises surface patterns M-U. Intermediate movements between those of images 150(1) and 150(2), with associated capture of images 150, may also be performed but for ease and clarity of illustration are not shown in FIG. 2B. Regardless, a re-referencing would be necessary, with image 150(2) now becoming the new reference image 150; otherwise positional reference information would be lost.
- Following the third movement, the image 150 capable of capture by the image sensor 110 is image 150(3), which comprises surface patterns S-Z and a. Intermediate movements between those of images 150(2) and 150(3), with associated capture of images 150, may also be performed but for ease and clarity of illustration are not shown in FIG. 2B. Regardless, a re-referencing would be necessary, with image 150(3) now becoming the new reference image 150; otherwise positional reference information would be lost.
FIG. 3 is a drawing of a printer 380 with the optical navigation system 100 as described in various representative embodiments. In FIG. 3, a piece of paper 130 is shown placed on a platen 310. Rollers 320 hold the page 130 against the platen 310. Appropriate rotation of the rollers 320 moves the page 130 back and forth parallel to the Y-axis. A print head 330 is driven back and forth parallel to the X-axis along a rod 340 mounted between supports 350. Appropriate movement of the print head 330 places the print head 330 at a selected location over the page 130 for dispersing ink from the print head 330 to the page 130. In other applications, other devices 330 besides the print head 330 can be attached to the optical navigation system 100.
FIG. 4A is a plot of velocity versus time for theoptical navigation system 100 as described in various representative embodiments. For the representative case of constant acceleration, velocity is determined as a function of time by the following equation: v(t)=a*t wherein v is the velocity, a is the acceleration, and t is time as measured from the initiation of the acceleration. The more general relationship is v(t)=∫[a(t)*dt].FIG. 4A assumes for illustrative purposes that the acceleration of theprint head 330 is constant from initiation at time T0 until time T1. During the time period T0 to T1, the velocity inFIG. 4A is indicated by thesolid line 401. At time T1 a maximum velocity VM is reached and the acceleration force on theprint head 330 is removed with any remaining force only matching the resistive forces in the system. During the time period T1 to T 2, the velocity is indicated by thesolid line 402. Theprint head 330 then travels at the constant velocity VM until time T2 when theprint head 330 begins to decelerate. - Deceleration occurs again at an assumed constant rate which again for purposes of illustration only is assumed to be at the same rate as that of acceleration between times T0 and T1. During the time period T2 to TF, the velocity is indicated by the
solid line 403. Time T0 represents the time that movement of theprint head 330 andoptical navigation system 100 begin to move from a rest position. Typically for aprinter 380 this movement at time T0 is either in the X-direction or in the Y-direction, but not both at the same time. However, this is a choice of the application designer and not a constraint on theoptical navigation system 100. Also, the choices of a constant acceleration and constant deceleration are made herein for ease of illustration. Other acceleration and velocity profiles are possible in any given application and may be dictated by the physical constraints of the application but not of the concepts of the representative embodiments herein. - Given the physical parameters of the system, i.e., acceleration and deceleration characteristics as well as the maximum velocity, time TF determines the stopping location of the
print head 330 and the optical navigation system 100. Should the velocity never reach the maximum velocity VM, deceleration will occur along the dashed lines 404 as appropriate, with the termination time again determining the stopping location of the print head 330 and the optical navigation system 100. Also, movement at the maximum velocity VM may continue until a time greater or less than T2, in which case deceleration will occur along the dashed lines 405, with final stopping times less than or greater than the time TF shown in FIG. 4A and with correspondingly different stopping positions. -
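The trapezoidal velocity profile described above can be captured in a short sketch. This is an illustrative model only, assuming constant acceleration a up to T1, constant velocity until T2, and symmetric constant deceleration until TF; the function and parameter names are ours, not the patent's.

```python
def velocity(t, a, t1, t2, tf):
    """Velocity v(t) for the assumed trapezoidal profile of FIG. 4A."""
    vm = a * t1                          # maximum velocity VM, reached at T1
    if t < 0 or t > tf:
        return 0.0                       # at rest outside [T0, TF]
    if t <= t1:
        return a * t                     # acceleration phase (solid line 401)
    if t <= t2:
        return vm                        # constant-velocity phase (solid line 402)
    return max(vm - a * (t - t2), 0.0)   # deceleration phase (solid line 403)
```

With, say, a = 2, T1 = 1, T2 = 3, and TF = 4, the profile rises to VM = 2, holds, and returns to zero at TF, matching the shape of solid lines 401-403.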
FIG. 4B is a plot of distance moved versus time for the optical navigation system 100 as described in various representative embodiments. In FIG. 4B, distance moved versus time is plotted, as an example, for the print head 330 and the optical navigation system 100 as in FIG. 3 for the acceleration phase during the times T0 to T1. The print head 330 and the optical navigation system 100 move from location X0 at time T0 to location X1 at time T1. -
FIG. 4C is another plot of distance moved versus time for the optical navigation system 100 as described in various representative embodiments. In FIG. 4C, distance moved versus time is plotted, again as an example, for the print head 330 and the optical navigation system 100 as in FIG. 3 for the constant-velocity phase during the times T1 to T2. The print head 330 and the optical navigation system 100 move from location X1 at time T1 to location X2 at time T2. -
FIG. 4D is still another plot of distance moved versus time for the optical navigation system 100 as described in various representative embodiments. In FIG. 4D, distance moved versus time is plotted, yet again as an example, for the print head 330 and the optical navigation system 100 as in FIG. 3 for the deceleration phase during the times T2 to TF. The print head 330 and the optical navigation system 100 move from location X2 at time T2 to location XF at time TF. -
FIG. 4E is yet another plot of distance moved versus time for the optical navigation system 100 as described in various representative embodiments. FIG. 4E is a composite plot of FIGS. 4B-4D. Note that at any position on FIGS. 4B-4E, the slope of the plot is equal to the velocity of the print head 330 and the optical navigation system 100 at that time. Also, note that the slope of the respective plots is zero at times T0 and TF, corresponding to zero velocity, and that the slopes of the respective plots do not exhibit discontinuities either at T1 between FIGS. 4B and 4C or at T2 between FIGS. 4C and 4D. -
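Since each distance curve is the time integral of the FIG. 4A velocity profile, the composite curve of FIG. 4E can be reproduced numerically and checked for the properties just noted (zero slope at T0 and TF, no slope discontinuity at T1 or T2). The parameter values below are assumptions chosen for illustration only.

```python
A, T1, T2, TF = 2.0, 1.0, 3.0, 4.0        # assumed profile parameters
VM = A * T1                               # maximum velocity, reached at T1

def velocity(t):
    """Trapezoidal velocity profile of FIG. 4A (assumed parameters)."""
    if t < 0 or t > TF:
        return 0.0
    if t <= T1:
        return A * t
    if t <= T2:
        return VM
    return max(VM - A * (t - T2), 0.0)

def position(t, steps=10000):
    """x(t) = integral of v from T0 = 0 to t, by the trapezoidal rule."""
    dt = t / steps
    return sum(0.5 * (velocity(i * dt) + velocity((i + 1) * dt)) * dt
               for i in range(steps))
```

For these values the closed-form distances are x(T1) = 1, x(T2) = 5, and x(TF) = 6, and the numerically estimated slope is continuous across T1 and T2.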
FIG. 5 is a plot of a mechanical displacement estimate 510 versus an optical displacement estimate 520 of the optical navigation system 100 as described in various representative embodiments. The ideal case, with zero error in both the mechanical displacement estimate 510 and the optical displacement estimate 520, is shown by the solid, straight line rising at 45 degrees to the right in FIG. 5. Dashed lines parallel to the solid 45-degree line represent an error range 530 defined by the designer or user as being acceptable. In the example of FIG. 5, the mechanical displacement estimate 510 is XMECH and the optical displacement estimate 520 is XOPT, which intersect at point E. Note that in the representative example of FIG. 5, point E lies outside the acceptable error range 530. The displacement estimate which the system reports could be adjusted by taking the projection, onto the axis of the optical displacement estimate 520, of the intersection of XMECH with the upper 45-degree error-range line as an adjusted displacement estimate 540, indicated in FIG. 5 as XCORR. - Algorithms other than that discussed above for adjusting the optically determined location are also possible. As an example, XMECH could be used as a maximum for XOPT. A minimum value for XMECH could also be determined for setting a minimum value for XOPT. In addition, while the above example has used constant acceleration and constant deceleration, more complex velocity/acceleration profiles could also be used and may, in fact, be dictated by the application. Further, the data may be theoretically or empirically determined. And still further, the motion profile may be accessed via equations, via tables of numbers representing the profile, or via other acceptable means.
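One way to read the adjustment of FIG. 5 is as a clamp: when the optical estimate strays outside the acceptable band around the mechanical estimate, it is pulled back to the nearest band edge. The sketch below is a hedged illustration of that idea, not the claimed circuit; the tolerance half-width is a designer-chosen assumption.

```python
def adjust_displacement(x_opt, x_mech, tolerance):
    """Return the reported estimate: XOPT if inside the error range 530,
    otherwise the clamped value XCORR on the nearest band edge."""
    upper = x_mech + tolerance           # upper 45-degree dashed line
    lower = x_mech - tolerance           # lower 45-degree dashed line
    if x_opt > upper:
        return upper                     # point E above the band, as in FIG. 5
    if x_opt < lower:
        return lower                     # point E below the band
    return x_opt                         # within the acceptable error range
```

The one-sided variants mentioned above (XMECH as a hard maximum, or a derived minimum, for XOPT) correspond to clamping on only one side of the band.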
-
FIG. 6A is a drawing of a more detailed block diagram of part of the optical navigation system of FIG. 1. In FIG. 6A, the navigation circuit 170 comprises an optical displacement estimation digital circuit 371, also referred to herein as a first digital circuit 371, for determining the optical displacement estimate 520 between the image sensor 110 and the object 130, obtained by comparing the image 150 captured at a given time to the image 150 captured at a subsequent time; a mechanical displacement estimation digital circuit 372, also referred to herein as a second digital circuit 372, for determining the mechanical displacement estimate 510, obtained from consideration of the mechanical characteristics of the optical navigation system 100; and a displacement estimation adjustment digital circuit 373, also referred to herein as a third digital circuit 373, for determining an adjusted displacement estimate 540 based on the optical displacement estimate 520 and the mechanical displacement estimate 510. Note that a displacement may or may not occur between the times corresponding to the capture of the two images 150. -
FIG. 6B is a drawing of a more detailed block diagram of a part of the optical navigation system of FIG. 6A. In FIG. 6B, the optical displacement estimation digital circuit 371 comprises an image shift digital circuit 374, also referred to herein as a fourth digital circuit 374, for performing multiple shifts of one of the images 150; a shift comparison digital circuit 375, also referred to herein as a fifth digital circuit 375, for performing a comparison, which could be a cross-correlation comparison, between one of the other images 150 and the multiple shifted images 150; a displacement estimation computation digital circuit 376, also referred to herein as a sixth digital circuit 376, for using the shift information for the shifted image 150, which information could be used in comparisons employing cross-correlations to the one other image 150, to compute the estimate of the relative displacement between the image sensor 110 and the object 130; and an image specification digital circuit 377, also referred to herein as a seventh digital circuit 377, for specifying which images 150 to use in determining the optical displacement estimate 520, i.e., the relative displacement between the image sensor 110 and the object 130. - Some integrated circuits, such as the Agilent ADNS-2030 used in optical mice, use a technique called "prediction" that reduces the amount of computation needed for cross-correlation. In theory, an optical mouse could work by computing every possible cross-correlation of images (i.e., a shift of 1 pixel in all directions, a shift of 2 pixels in all directions, etc.) for any given pair of images. The problem with this is that as the number of shifts considered increases, the needed computations increase even faster. For example, for a 9×9 pixel optical mouse there are only 9 possible positions considering a maximum shift of 1 pixel (8 shifted by 1 pixel and one for no movement), but there are 25 possible positions for a maximum considered shift of 2 pixels, and so forth.
Prediction decreases the amount of computation by pre-shifting one of the images based on an estimated mouse velocity to attempt to overlap the images exactly. Thus, the maximum amount of shift between the two images is smaller because the shift is related to the error in the prediction process rather than the absolute velocity of the mouse. Consequently, less computation is required. See U.S. Pat. No. 6,433,780 by Gordon et al.
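The shift-and-compare search, and the way prediction shrinks it, can be sketched as follows. This is a simplified illustration using a sum-of-squared-differences comparison on small pixel arrays; it is not the ADNS-2030's actual algorithm, and all names are ours. With a search radius of 1 the inner loops visit 9 candidate positions, and 25 for a radius of 2, matching the growth described above.

```python
def ssd(a, b, dx, dy):
    """Mean squared difference between frame a and frame b shifted by (dx, dy),
    computed over the overlapping pixels only."""
    h, w = len(a), len(a[0])
    total, count = 0.0, 0
    for y in range(h):
        for x in range(w):
            ys, xs = y + dy, x + dx
            if 0 <= ys < h and 0 <= xs < w:
                total += (a[y][x] - b[ys][xs]) ** 2
                count += 1
    return total / max(count, 1)

def estimate_shift(prev, curr, predicted=(0, 0), radius=1):
    """Best displacement within `radius` pixels of the predicted shift.
    Prediction keeps `radius` small: it need only cover the prediction
    error, not the full displacement of the mouse."""
    px, py = predicted
    candidates = ((px + dx, py + dy)
                  for dy in range(-radius, radius + 1)
                  for dx in range(-radius, radius + 1))
    return min(candidates, key=lambda s: ssd(prev, curr, s[0], s[1]))
```

A fast-moving surface that would need a large unpredicted search radius can thus be tracked with a radius of 1 or 2 around the predicted position.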
-
FIG. 7 is a flow chart of a method 600 for using the optical navigation system 100 as described in various representative embodiments. - In
block 610, an image 150 of an area of the surface 160 of the work piece 130 is captured by the optical navigation system 100. This image 150 is captured without prior knowledge of whether the optical navigation system 100 with the print head 330 is stationary or has been moved to a new location. Block 610 then transfers control to block 620. - In
block 620, an expected new position, the optical displacement estimate XOPT of FIG. 5, is obtained by comparing successive images 150 of areas of the page 130 captured by the image sensor 110, as described above. Block 620 then transfers control to block 630. - In
block 630, an expected new position based on the mechanical profiles of the system, the mechanical displacement estimate XMECH of FIG. 5, is obtained by, for example, the techniques described above with respect to FIGS. 4A-4E. Block 630 then transfers control to block 640. - In
block 640, if the intersection of the theoretical mechanical location XMECH and the optical displacement estimate XOPT (point E of FIG. 5) lies outside the acceptable error range 530, block 640 transfers control to block 650. Otherwise, block 640 transfers control to block 660. - In
block 650, the optical displacement estimate XOPT is corrected, using knowledge of XMECH, to the adjusted displacement estimate XCORR. Block 650 then transfers control to block 660. - In
block 660, the new position, based on an initial starting location of the optical navigation system 100, is reported to the controller 190. The actual value reported depends upon the path taken to block 660. The reported value is that of the unadjusted optical displacement estimate XOPT if control passed from block 640 directly to block 660. Otherwise, the reported value is that of the adjusted displacement estimate XCORR. Block 660 then transfers control back to block 610. - Motion computations are performed for an optical navigation system in a way that is informed by knowledge of the movement profile of the device whose motion is being measured. By noting the direction of intended motion and the plausible acceleration and velocity profiles that the object is expected to undergo, it is possible to increase the accuracy of the positional determination obtained from the cross-correlation-based image motion measurements normally used by an optical mouse.
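The loop through blocks 610-660 can be sketched as a single step function. The capture and estimation callables stand in for the image sensor 110 and circuits 371-372, and the one-sided clamp mirrors the FIG. 5 adjustment; all names here are illustrative assumptions, not the patent's.

```python
def navigation_step(capture_image, optical_estimate, mechanical_estimate,
                    previous_image, tolerance):
    """One pass through blocks 610-660; returns (reported position, new image)."""
    image = capture_image()                            # block 610
    x_opt = optical_estimate(previous_image, image)    # block 620
    x_mech = mechanical_estimate()                     # block 630
    if abs(x_opt - x_mech) > tolerance:                # block 640: outside range 530?
        # block 650: correct XOPT toward XMECH, yielding XCORR
        x_corr = x_mech + tolerance if x_opt > x_mech else x_mech - tolerance
        return x_corr, image                           # block 660 (adjusted value)
    return x_opt, image                                # block 660 (unadjusted XOPT)
```

The caller would hold the returned image as `previous_image` for the next pass, completing the loop back to block 610.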
- The representative embodiments, which have been described in detail herein, have been presented by way of example and not by way of limitation. It will be understood by those skilled in the art that various changes may be made in the form and details of the described embodiments resulting in equivalent embodiments that remain within the scope of the appended claims.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/083,796 US7119323B1 (en) | 2005-03-18 | 2005-03-18 | Error corrected optical navigation system |
Publications (2)
Publication Number | Publication Date |
---|---|
US20060208155A1 true US20060208155A1 (en) | 2006-09-21 |
US7119323B1 US7119323B1 (en) | 2006-10-10 |
Family
ID=37009338
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/083,796 Active 2025-05-18 US7119323B1 (en) | 2005-03-18 | 2005-03-18 | Error corrected optical navigation system |
Country Status (1)
Country | Link |
---|---|
US (1) | US7119323B1 (en) |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6950094B2 (en) * | 1998-03-30 | 2005-09-27 | Agilent Technologies, Inc | Seeing eye mouse for a computer system |
JP2006072497A (en) * | 2004-08-31 | 2006-03-16 | Mitsumi Electric Co Ltd | Mouse input device |
US7248345B2 (en) * | 2004-11-12 | 2007-07-24 | Silicon Light Machines Corporation | Signal processing method for use with an optical navigation system |
WO2007030731A2 (en) * | 2005-09-07 | 2007-03-15 | Nr Laboratories, Llc | Positional sensing system and method |
US8471191B2 (en) * | 2005-12-16 | 2013-06-25 | Cypress Semiconductor Corporation | Optical navigation system having a filter-window to seal an enclosure thereof |
US7765251B2 (en) * | 2005-12-16 | 2010-07-27 | Cypress Semiconductor Corporation | Signal averaging circuit and method for sample averaging |
US7884801B1 (en) | 2006-02-16 | 2011-02-08 | Cypress Semiconductor Corporation | Circuit and method for determining motion with redundant comb-arrays |
US7728816B2 (en) * | 2006-07-10 | 2010-06-01 | Cypress Semiconductor Corporation | Optical navigation sensor with variable tracking resolution |
US7742514B1 (en) | 2006-10-31 | 2010-06-22 | Cypress Semiconductor Corporation | Laser navigation sensor |
US8072429B2 (en) | 2006-12-22 | 2011-12-06 | Cypress Semiconductor Corporation | Multi-axial touch-sensor device with multi-touch resolution |
WO2009112895A1 (en) * | 2008-03-10 | 2009-09-17 | Timothy Webster | Position sensing of a piston in a hydraulic cylinder using a photo image sensor |
US8541727B1 (en) | 2008-09-30 | 2013-09-24 | Cypress Semiconductor Corporation | Signal monitoring and control system for an optical navigation sensor |
US7723659B1 (en) | 2008-10-10 | 2010-05-25 | Cypress Semiconductor Corporation | System and method for screening semiconductor lasers |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6433780B1 (en) * | 1995-10-06 | 2002-08-13 | Agilent Technologies, Inc. | Seeing eye mouse for a computer system |
US6525306B1 (en) * | 2000-04-25 | 2003-02-25 | Hewlett-Packard Company | Computer mouse with integral digital camera and method for using the same |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060238508A1 (en) * | 2005-04-22 | 2006-10-26 | Tong Xie | Optical location measuring system |
US20140160021A1 (en) * | 2012-12-07 | 2014-06-12 | Wen-Chieh Geoffrey Lee | Optical Mouse with Cursor Rotating Ability |
TWI562022B (en) * | 2012-12-07 | 2016-12-11 | Wen Chieh Geoffrey Lee | Optical mouse with cursor rotating ability, the motion detector and method thereof |
US9733727B2 (en) * | 2012-12-07 | 2017-08-15 | Wen-Chieh Geoffrey Lee | Optical mouse with cursor rotating ability |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: MICROSOFT CORPORATION, WASHINGTON Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DUPLESSIS, JEAN-PIERRE;GATTA, SRINIVAS RAGHU;HANLON, WILLIAM N.;REEL/FRAME:016075/0503 Effective date: 20050321 |
|
AS | Assignment |
Owner name: AGILENT TECHNOLOGIES, INC, COLORADO Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BROSNAN, MICHAEL J.;XIE, TONG;REEL/FRAME:016288/0567 Effective date: 20050301 |
|
AS | Assignment |
Owner name: AVAGO TECHNOLOGIES GENERAL IP PTE. LTD.,SINGAPORE Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AGILENT TECHNOLOGIES, INC.;REEL/FRAME:017206/0666 Effective date: 20051201 Owner name: AVAGO TECHNOLOGIES GENERAL IP PTE. LTD., SINGAPORE Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AGILENT TECHNOLOGIES, INC.;REEL/FRAME:017206/0666 Effective date: 20051201 |
|
AS | Assignment |
Owner name: AVAGO TECHNOLOGIES ECBU IP (SINGAPORE) PTE. LTD.,S Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.;REEL/FRAME:017675/0518 Effective date: 20060127 Owner name: AVAGO TECHNOLOGIES ECBU IP (SINGAPORE) PTE. LTD., Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.;REEL/FRAME:017675/0518 Effective date: 20060127 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
FPAY | Fee payment |
Year of fee payment: 4 |
|
AS | Assignment |
Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD Free format text: MERGER;ASSIGNOR:AVAGO TECHNOLOGIES ECBU IP (SINGAPORE) PTE. LTD.;REEL/FRAME:030369/0528 Effective date: 20121030 |
|
FPAY | Fee payment |
Year of fee payment: 8 |
|
AS | Assignment |
Owner name: DEUTSCHE BANK AG NEW YORK BRANCH, AS COLLATERAL AGENT, NEW YORK Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.;REEL/FRAME:032851/0001 Effective date: 20140506 Owner name: DEUTSCHE BANK AG NEW YORK BRANCH, AS COLLATERAL AG Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.;REEL/FRAME:032851/0001 Effective date: 20140506 |
|
AS | Assignment |
Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD., SINGAPORE Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENT RIGHTS (RELEASES RF 032851-0001);ASSIGNOR:DEUTSCHE BANK AG NEW YORK BRANCH, AS COLLATERAL AGENT;REEL/FRAME:037689/0001 Effective date: 20160201 Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENT RIGHTS (RELEASES RF 032851-0001);ASSIGNOR:DEUTSCHE BANK AG NEW YORK BRANCH, AS COLLATERAL AGENT;REEL/FRAME:037689/0001 Effective date: 20160201 |
|
AS | Assignment |
Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH CAROLINA Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.;REEL/FRAME:037808/0001 Effective date: 20160201 Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.;REEL/FRAME:037808/0001 Effective date: 20160201 |
|
AS | Assignment |
Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE ASSIGNEE NAME PREVIOUSLY RECORDED AT REEL: 017206 FRAME: 0666. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT;ASSIGNOR:AGILENT TECHNOLOGIES, INC.;REEL/FRAME:038632/0662 Effective date: 20051201 |
|
AS | Assignment |
Owner name: PIXART IMAGING INC., TAIWAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.;REEL/FRAME:039788/0572 Effective date: 20160805 |
|
AS | Assignment |
Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD., SINGAPORE Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:039862/0129 Effective date: 20160826 Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:039862/0129 Effective date: 20160826 |
|
AS | Assignment |
Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD., SINGAPORE Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:041710/0001 Effective date: 20170119 Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:041710/0001 Effective date: 20170119 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553) Year of fee payment: 12 |