WO2008095227A1 - System for controlling movement of a cursor on a display device


Info

Publication number
WO2008095227A1
Authority
WO
WIPO (PCT)
Prior art keywords
sensing device
cursor
data
substrate
relative
Application number
PCT/AU2008/000047
Other languages
French (fr)
Inventor
Andrew Timothy Robert Newman
Paul Lapstun
Kia Silverbrook
Original Assignee
Silverbrook Research Pty Ltd
Application filed by Silverbrook Research Pty Ltd
Publication of WO2008095227A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06K - GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K19/00 - Record carriers for use with machines and with at least a part designed to carry digital markings
    • G06K19/06 - Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 - Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033 - Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0354 - Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
    • G06F3/03545 - Pens or stylus
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 - Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304 - Detection arrangements using opto-electronic means
    • G06F3/0317 - Detection arrangements using opto-electronic means in co-operation with a patterned surface, e.g. absolute position or relative movement detection for an optical mouse or pen positioned with respect to a coded surface
    • G06F3/0321 - Detection arrangements using opto-electronic means in co-operation with a patterned surface, e.g. absolute position or relative movement detection for an optical mouse or pen positioned with respect to a coded surface by optically sensing the absolute position with respect to a regularly patterned surface forming a passive digitiser, e.g. pen optically detecting position indicative tags printed on a paper sheet
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06K - GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00 - Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10 - Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/10544 - Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation by scanning of the records by radiation in the optical part of the electromagnetic spectrum
    • G06K7/10712 - Fixed beam scanning
    • G06K7/10722 - Photodetector array or CCD scanning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/32 - Determination of transform parameters for the alignment of images, i.e. image registration using correlation-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 - Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/033 - Indexing scheme relating to G06F3/033
    • G06F2203/0337 - Status LEDs integrated in the mouse to provide visual feedback to the user about the status of the input device, the PC, or the user
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 - Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/038 - Indexing scheme relating to G06F3/038
    • G06F2203/0384 - Wireless input, i.e. hardware and software details of wireless interface arrangements for pointing devices

Definitions

  • the present invention relates to a method and system for reading a position-coding pattern disposed on a surface. It has been developed primarily to improve the functionality of a sensing device used for reading the position-coding pattern.
  • the Applicant has previously described a method of enabling users to access information from a computer system via a printed substrate e.g. paper.
  • the substrate has coded data printed thereon, which is read by an optical sensing device when the user interacts with the substrate using the sensing device.
  • a computer receives interaction data from the sensing device and uses this data to determine what action is being requested by the user. For example, a user may make handwritten input onto a form or make a selection gesture around a printed item. This input is interpreted by the computer system with reference to a page description corresponding to the printed substrate.
  • the present invention provides a system for controlling movement of a cursor on a display device, the system comprising: a substrate having a position-coding pattern disposed on or in a surface thereof; a sensing device comprising: an image sensor for optically imaging the position-coding pattern; and a processor configured for: generating absolute motion data by determining a plurality of absolute positions of the sensing device relative to the surface using the imaged position-coding pattern; generating orientation data indicative of an orientation of the sensing device relative to the substrate; and using the orientation data to translate the absolute motion data into relative motion data, said relative motion data being indicative of relative motion of the sensing device from the perspective of a user; and communication means for communicating the relative motion data to a computer system; and the computer system configured for: receiving said relative motion data from the sensing device; interpreting said relative motion data as cursor movement; and generating cursor control commands for said display device.
  • the relative motion data is indicative of relative position changes of the sensing device substantially from a perspective of a user, and irrespective of an orientation of said substrate.
  • the position-coding pattern comprises a plurality of tags, each tag identifying a location on the surface and a rotational orientation of the tag relative to the substrate, thereby enabling a yaw of the sensing device relative to the substrate to be determined.
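  • As an informal illustration of the translation described above (not taken from the specification), the absolute position deltas reported in substrate coordinates can be rotated by the negative of the pen's yaw to obtain motion from the user's perspective. The function below is a minimal sketch; the convention that yaw is measured in radians, counter-clockwise from the substrate's x-axis, is an assumption.

```python
import math

def absolute_to_relative(positions, yaws):
    """Convert successive absolute pen positions (substrate coordinates) into
    user-relative motion deltas by rotating each delta by the negative of the
    pen's yaw, so the result is independent of how the page is oriented."""
    relative = []
    for (x0, y0), (x1, y1), yaw in zip(positions, positions[1:], yaws[1:]):
        dx, dy = x1 - x0, y1 - y0                 # delta in substrate coordinates
        c, s = math.cos(-yaw), math.sin(-yaw)
        relative.append((dx * c - dy * s, dx * s + dy * c))
    return relative
```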
  • said display device is selected from at least one of: a display device associated with the computer system; a display device integral with the computer system; and a display device remote from the computer system.
  • said sensing device is operable in a plurality of modes, said plurality including a cursor mode and at least one other mode, and wherein the computer system is further configured for: determining that said sensing device is operating in a cursor mode.
  • said at least one other mode is selected from the group comprising: a scroll mode; a hyperlinking mode; a searching mode; a content-extraction mode; and a handwriting mode.
  • said sensing device comprises a mode selector, and said interaction data comprises mode data indicative of said cursor mode.
  • said mode selector comprises at least one of: one or more mode buttons operable by a user; and a sensor for detecting a force exerted by said sensing device on said surface.
  • said computer system is configured for retrieving stored mode data indicative of a most recent mode selected for said sensing device.
  • said computer system is further configured for: determining if said sensing device is positioned within a cursor zone of said substrate, said cursor zone being activated by determination of said cursor mode; and interpreting relative motion of said sensing device only within said cursor zone as said cursor movement.
  • said computer system is further configured for: determining if said sensing device is positioned within a scroll zone of said substrate, said scroll zone being activated by determination of said cursor mode; and interpreting the interaction of said sensing device within said scroll zone as a scroll action; scrolling a page displayed on said display device according to said scroll action.
  • said computer system is configured for at least one of: interpreting at least one absolute position of said sensing device within said scroll zone to be indicative of a scroll direction; and interpreting relative motion of said sensing device within said scroll zone to be indicative of a scroll direction.
  • the position-coding pattern is further indicative of an identity of the substrate and the interaction data comprises substrate identity data.
  • the substrate is a cursor control substrate and said computer system is configured for using the substrate identity data to retrieve a cursor page description corresponding to said cursor control substrate, said cursor page description comprising a cursor zone within which the interaction of said sensing device is interpreted as said cursor movement.
  • said cursor page description comprises a scroll zone within which the interaction of said sensing device is interpreted as a scroll action, and wherein said computer system is configured to scroll a page displayed on said display device according to said scroll action.
  • said cursor control substrate has visible markings indicating at least one of: said cursor zone, said scroll zone and a scroll direction.
  • said scroll zone is located at an edge region of said substrate.
  • the present invention provides a method of controlling movement of a cursor on a display device via a substrate having a position-coding pattern disposed on or in a surface thereof, said method comprising the steps of: receiving, in a computer system, interaction data indicative of an interaction of the sensing device with the substrate, said interaction data comprising: absolute motion data indicative of a plurality of absolute positions of the sensing device relative to the surface; and orientation data indicative of an orientation of the sensing device relative to the substrate; and using the orientation data to translate the absolute motion data into relative motion data, said relative motion data being indicative of position changes of the sensing device relative to itself; interpreting said relative motion data as cursor movement; and generating cursor control commands for said display device.
  • the present invention provides a sensing device for controlling movement of a cursor on a display device, said sensing device comprising: an image sensor for optically imaging a position-coding pattern disposed on or in a surface; and a processor configured for: generating absolute motion data by determining a plurality of absolute positions of the sensing device relative to the surface using the imaged position-coding pattern; generating orientation data indicative of an orientation of the sensing device relative to the substrate; and using the orientation data to translate the absolute motion data into relative motion data, said relative motion data being indicative of relative motion of the sensing device from the perspective of a user; and communication means for communicating the relative motion data to a computer system, thereby enabling the computer system to generate cursor control commands using the relative motion data for controlling movement of the cursor on the display device.
  • the present invention provides a computer system for controlling movement of a cursor on a display device via a substrate having a position-coding pattern disposed on or in a surface thereof, said computer system being configured for: receiving interaction data indicative of an interaction of the sensing device with the substrate, said interaction data comprising: absolute motion data indicative of a plurality of absolute positions of the sensing device relative to the surface; and orientation data indicative of an orientation of the sensing device relative to the substrate; using the orientation data to translate the absolute motion data into relative motion data, said relative motion data being indicative of position changes of the sensing device relative to itself; interpreting said relative motion data as cursor movement; and generating cursor control commands for said display device.
  • the present invention provides a sensing device for interaction with a surface, said sensing device having automatic mode selection, said sensing device comprising: an image sensor for imaging the surface and generating image data; a motion sensor configured for determining one or more relative position changes of the sensing device; a processor configured for: receiving the image data; and automatically selecting, using said image data, either an interaction mode or a cursor mode for said sensing device; and communication means for transmitting either interaction data or cursor data to a computer system, dependent on said selected mode, wherein said processor is configured to: select the interaction mode and generate interaction data from the image data if said image data indicates that said sensing device is interacting with a first surface having a position-coding pattern disposed thereon, said interaction data being indicative of at least one absolute location of the sensing device relative to the surface; and select the cursor mode if said image data indicates that said sensing device is interacting with a second surface lacking a position-coding pattern, said cursor data being indicative of said one or more relative position changes of the sensing device
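  • A minimal, non-authoritative sketch of the automatic mode selection described above is given below; it assumes the tag detector and the relative motion sensor have already produced their outputs, and the tuple layout of a decoded tag is invented for illustration.

```python
def select_mode(decoded_tags, motion_delta):
    """If the captured frame yielded at least one decoded tag, report absolute
    interaction data; otherwise fall back to cursor (mouse-like) mode and
    report the relative position change from the motion sensor."""
    if decoded_tags:
        region_id, x, y = decoded_tags[0]      # invented layout: (region, x, y)
        return "interaction", {"region_id": region_id, "x": x, "y": y}
    dx, dy = motion_delta
    return "cursor", {"dx": dx, "dy": dy}

# e.g. select_mode([], (3, -1)) -> ('cursor', {'dx': 3, 'dy': -1})
```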
  • the motion sensor is selected from any one of the group comprising: at least one accelerometer; a mechanical mouse; an optical mouse; and a point interferometry device.
  • said motion sensor is an optical mouse utilizing at least one of: a pattern-based optical mouse technique; a texture-based optical mouse technique; and a laser-speckle- based optical mouse technique.
  • the position-coding pattern of the first surface is indicative of a plurality of locations on the surface and of an identity of a region.
  • said processor is configured for determining the identity of the region using the imaged position-coding pattern, and said interaction data is further indicative of the identity of the region.
  • the identity of the region is coincident with an identity of the surface.
  • the position-coding pattern is comprised of a plurality of tags, each tag identifying the identity of the surface and a location of the tag on the surface.
  • the present invention provides a system for initiating an action corresponding to interaction of a sensing device relative to a surface, said system comprising: (A) the sensing device comprising: an image sensor for imaging the surface and generating image data; a motion sensor configured for determining one or more relative position changes of the sensing device; a processor configured for: receiving the image data; and automatically selecting, using said image data, either an interaction mode or a cursor mode for said sensing device; and communication means for transmitting either interaction data or cursor data to a computer system, dependent on said selected mode, wherein said processor is configured to: select the interaction mode and generate interaction data if said image data indicates that said sensing device is interacting with a first surface having a position-coding pattern disposed thereon, said interaction data being indicative of an absolute location of the sensing device relative to the surface; and select the cursor mode if said image data indicates that said sensing device is interacting with a second surface lacking a position-coding pattern, said cursor data being indicative of said one or more relative position changes of the sensing device.
  • said action initiated by said interaction data is selected from at least one of: hyperlinking; form-filling; searching; and content-extraction.
  • the position-coding pattern of the first surface is indicative of a plurality of locations on the surface and of an identity of a region.
  • said processor is configured for determining the identity of the region using the imaged position-coding pattern, and said interaction data is further indicative of the identity of the region.
  • said computer system is configured to interpret said interaction data by the steps of: identifying and retrieving a page description corresponding to the first surface using the identity of the region; determining a request using the retrieved page description and the interaction data; and initiating an action based on said request.
  • the identity of the region is coincident with an identity of the surface.
  • the position-coding pattern is comprised of a plurality of tags, each tag identifying the identity of the surface and a location of the tag on the surface.
  • said display device is selected from at least one of: a display device associated with the computer system; a display device integral with the computer system; and a display device remote from the computer system.
  • the present invention provides a method of automatically selecting a mode of a sensing device interacting with a surface, said sensing device comprising a motion sensor configured for determining one or more relative position changes of the sensing device, said method comprising the steps of: imaging the surface and generating image data; automatically selecting, using said image data, either an interaction mode or a cursor mode for said sensing device; and transmitting either interaction data or cursor data to a computer system, dependent on said selected mode, wherein: the interaction mode is selected and the interaction data is generated from the image data if said image data indicates that said sensing device is interacting with a first surface having a position-coding pattern disposed thereon, said interaction data being indicative of at least one absolute location of the sensing device relative to the surface; and the cursor mode is selected and one or more relative position changes of the sensing device are determined if said image data indicates that said sensing device is interacting with a second surface lacking a position-coding pattern, said cursor data being indicative of said one or more relative position changes of the sensing device.
  • the method further comprising the steps of: receiving the interaction data in the computer system; and interpreting said interaction data to initiate an action corresponding to said interaction with said surface.
  • the method further comprising the steps of: receiving the cursor data from the sensing device; interpreting said cursor data to control movement of a cursor on a display device.
  • the present invention provides a system for enabling scrolling of a page displayed on a display device, the system comprising: a substrate having a position-coding pattern disposed on or in a surface thereof; a sensing device operable in a plurality of modes including a cursor mode, said sensing device comprising: an image sensor for optically imaging the position-coding pattern; and a processor configured for generating interaction data indicative of an interaction of the sensing device with the surface, said interaction data being indicative of a position or movement of the sensing device relative to the surface; communication means for communicating the interaction data to a computer system; and the computer system configured for: receiving the interaction data from the sensing device; determining that said sensing device is operating in a cursor mode; determining a scroll zone for said substrate; determining if said sensing device is positioned within said scroll zone; interpreting said position or movement of said sensing device within said scroll zone as a scrolling action; and generating a scroll control command for said display device so as to scroll said displayed page.
  • the sensing device is operable in two or more modes selected from the group comprising: said cursor mode; a hyperlinking mode; a searching mode; a content-extraction mode; and a handwriting mode.
  • said sensing device comprises a mode selector, and said interaction data comprises mode data indicative of said cursor mode.
  • said mode selector comprises at least one of: one or more mode buttons operable by a user; and a sensor for detecting a force exerted by said sensing device on said surface.
  • said computer system is configured for retrieving stored mode data indicative of a most recent mode selected for said sensing device.
  • said computer system is configured for at least one of: interpreting at least one absolute position of said sensing device within said scroll zone to be indicative of a scroll direction; and interpreting movement of said sensing device within said scroll zone to be indicative of a scroll direction.
  • said scroll direction is selected from at least one of: vertical scrolling; horizontal scrolling; and diagonal scrolling.
  • said scroll zone is located at an edge region of said substrate.
  • said substrate comprises a plurality of scroll zones.
  • said substrate comprises visible markings indicating at least one of: said scroll zone and a scroll direction.
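  • By way of a hedged sketch (the zone geometry, coordinate units and the up/down convention are invented for illustration), interpreting pen positions inside an edge scroll zone might look like the following, with the upper half of a vertical scroll zone scrolling up and the lower half scrolling down.

```python
def interpret_scroll(x, y, scroll_zone):
    """scroll_zone is (left, top, right, bottom) in the same page coordinates
    as the pen position; returns a scroll command or None if outside the zone."""
    left, top, right, bottom = scroll_zone
    if not (left <= x <= right and top <= y <= bottom):
        return None
    return "scroll_up" if y < (top + bottom) / 2 else "scroll_down"

# A vertical scroll zone along the right-hand edge of an A4 page, in millimetres:
RIGHT_EDGE_ZONE = (200, 0, 210, 297)
print(interpret_scroll(205, 40, RIGHT_EDGE_ZONE))   # -> scroll_up
```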
  • said computer system is further configured for: determining if said sensing device is positioned within a cursor zone of said substrate; interpreting movement of said sensing device as a cursor movement; and generating cursor control commands for said display device.
  • said display device is selected from at least one of: a display device associated with the computer system; a display device integral with the computer system; and a display device remote from the computer system.
  • system further comprising the display device.
  • the position-coding pattern is further indicative of an identity of the substrate and the interaction data comprises substrate identity data.
  • said computer system is configured for retrieving a page description corresponding to said substrate using said substrate identity.
  • said computer system is configured for retrieving said page description if it is determined that said sensing device is not operating in said cursor mode.
  • said computer system is configured for: using said position or movement of said sensing device together with said retrieved page description to interpret said interaction of said sensing device with said substrate; and initiating an action corresponding to said interaction.
  • the present invention provides a method of enabling scrolling of a page displayed on a display device via a substrate having a position-coding pattern disposed in or on a surface thereof, said method comprising, in a computer system, the steps of: receiving interaction data indicative of an interaction of the sensing device with the substrate, said interaction data being indicative of a position or movement of the sensing device relative to the surface; determining that said sensing device is operating in a cursor mode; determining a scroll zone for said substrate; determining if said sensing device is positioned within a scroll zone of said substrate; interpreting said position or movement of said sensing device within said scroll zone as a scrolling action; and generating a scroll control command for said display device so as to scroll said displayed page.
  • the present invention provides a computer system for controlling scrolling of a page displayed on a display device, said computer system being configured for: receiving interaction data indicative of an interaction of the sensing device with a substrate having a position-coding pattern disposed on or in a surface thereof, said interaction data being indicative of a position or movement of the sensing device relative to the surface; determining that said sensing device is operating in a cursor mode; determining a scroll zone for said substrate; determining if said sensing device is positioned within a scroll zone of said substrate; interpreting said position or movement of said sensing device within said scroll zone as a scrolling action; and generating a scroll control command for said display device so as to scroll said displayed page.
  • the present invention provides a system for enabling user input and control of a cursor on a display device, the system comprising: a substrate having a position-coding pattern disposed on or in a surface thereof, said substrate having at least one input element and a discrete cursor zone; a sensing device comprising: an image sensor for optically imaging the position-coding pattern; and a processor configured for generating interaction data indicative of an interaction of the sensing device with the surface, said interaction data being indicative of a position or movement of the sensing device relative to the surface; communication means for communicating the interaction data to a computer system; and the computer system configured for: receiving the interaction data from the sensing device; retrieving a page description corresponding to said substrate; determining whether said sensing device is positioned within said cursor zone; interpreting movement of said sensing device within said cursor zone as a cursor movement and generating corresponding cursor control commands for said display device; and otherwise determining if said position or movement of said sensing device is within a zone of said at least one input element and initiating an action corresponding to said at least one input element.
  • said at least one user input element is a GUI control button and said action is a corresponding GUI control action.
  • said GUI control action is selected from the group comprising: scrolling; web browser control; page up; page down; cut; copy; paste; tab between GUI applications; launching of a GUI application; volume control; log off; sleep; and keyboard input.
  • said substrate is an explicitly dedicated GUI control substrate, said substrate comprising visible markings indicating said cursor zone and said at least one GUI control button.
  • said at least one input element is a hyperlink element, and said action is hyperlinking.
  • said substrate comprises a scroll zone, and said computer system is configured for interpreting a position or movement of said sensing device within said scroll zone to be indicative of a scrolling action.
  • said computer system is configured for at least one of: interpreting at least one absolute position of said sensing device within said scroll zone to be indicative of a scroll direction; and interpreting movement of said sensing device within said scroll zone to be indicative of a scroll direction.
  • said scroll direction is selected from at least one of: vertical scrolling; horizontal scrolling; and diagonal scrolling.
  • said substrate comprises a plurality of scroll zones.
  • said substrate comprises visible markings indicating at least one of: said scroll zone and a scroll direction.
  • said computer system is configured for interpreting movement of said sensing device within said cursor zone as relative movement.
  • said computer system is configured for interpreting the position or movement of said sensing device outside said cursor zone as an absolute position or movement relative to the surface.
  • said display device is selected from at least one of: a display device associated with the computer system; a display device integral with the computer system; and a display device remote from the computer system.
  • system further comprising the display device.
  • the position-coding pattern is further indicative of an identity of the substrate and the interaction data comprises substrate identity data.
  • said computer system is configured for retrieving the page description corresponding to said substrate using said substrate identity.
  • the present invention provides a method of enabling user input and control of a cursor on a display device via a substrate having a position-coding pattern disposed in or on a surface thereof, said substrate having at least one input element and a discrete cursor zone, said method comprising, in a computer system, the steps of: receiving interaction data indicative of an interaction of the sensing device with the substrate, said interaction data being indicative of a position or movement of the sensing device relative to the surface; retrieving a page description corresponding to said substrate; determining whether said sensing device is positioned within said cursor zone; interpreting movement of said sensing device within said cursor zone as a cursor movement and generating corresponding cursor control commands for said display device; and otherwise determining if said position or movement of said sensing device is within a zone of said at least one input element and initiating an action corresponding to said at least one input element.
  • the present invention provides a computer system for enabling user input and control of a cursor on a display device via a substrate having a position-coding pattern disposed in or on a surface thereof, said substrate comprising at least one input element and a discrete cursor zone, said computer system being configured for: receiving interaction data indicative of an interaction of the sensing device with the substrate, said interaction data being indicative of a position or movement of the sensing device relative to the surface; retrieving a page description corresponding to said substrate; determining whether said sensing device is positioned within said cursor zone; interpreting movement of said sensing device within said cursor zone as a cursor movement and generating corresponding cursor control commands for said display device; and otherwise determining if said position or movement of said sensing device is within a zone of said at least one input element and initiating an action corresponding to said at least one input element.
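  • The computer-system side of the arrangement above can be pictured as a simple dispatch over the zones recorded in the page description. The sketch below is illustrative only; the dictionary layout and callback names are assumptions, not the patent's data model.

```python
def dispatch(x, y, page_description, emit_cursor, run_action):
    """Hit-test the pen position against the cursor zone first, otherwise
    against each input element's zone (all zones are (left, top, right, bottom)
    rectangles in page coordinates)."""
    def inside(zone):
        left, top, right, bottom = zone
        return left <= x <= right and top <= y <= bottom

    if inside(page_description["cursor_zone"]):
        emit_cursor(x, y)                       # interpreted as cursor movement
        return
    for element, zone in page_description["input_elements"].items():
        if inside(zone):
            run_action(element)                 # e.g. hyperlink or GUI control button
            return
```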
  • Figure 1 shows an embodiment of basic netpage architecture
  • Figure 2 is a schematic of the relationship between a sample printed netpage and its online page description;
  • Figure 3 shows an embodiment of basic netpage architecture with various alternatives for the relay device;
  • Figure 3A illustrates a collection of netpage servers, Web terminals, printers and relays interconnected via a network;
  • Figure 4 is a schematic view of a high-level structure of a printed netpage and its online page description
  • Figure 5A is a plan view showing a structure of a netpage tag;
  • Figure 5B is a plan view showing a relationship between a set of the tags shown in Figure 5a and a field of view of a netpage sensing device in the form of a netpage pen
  • Figure 6A is a plan view showing an alternative structure of a netpage tag
  • Figure 6B is a plan view showing a relationship between a set of the tags shown in Figure 6A and a field of view of a netpage sensing device;
  • Figure 6C is a plan view showing an arrangement of nine of the tags shown in Figure 6a where targets are shared between adjacent tags;
  • Figure 6D is a plan view showing the interleaving and rotation of the symbols of the four codewords of the tag shown in Figure 6a;
  • Figure 7 is a flowchart of a tag image processing and decoding algorithm;
  • Figure 8 is a perspective view of a netpage pen and its associated tag-sensing field-of-view cone;
  • Figure 9 is a perspective exploded view of the netpage pen shown in Figure 8;
  • Figure 10 is a schematic block diagram of a pen controller for the netpage pen shown in Figures 8 and 9;
  • Figure 11 is a schematic view of a pen class diagram
  • Figure 12 is a schematic view of a document and page description class diagram
  • Figure 13 is a schematic view of a document and page ownership class diagram
  • Figure 14 is a schematic view of a terminal element specialization class diagram
  • Figure 15 shows cursor control and scroll functions mapped onto an arbitrary page
  • Figure 16 shows an explicit cursor control and scroll page
  • Figure 17 shows an explicit cursor control, scroll and keyboard page.
  • Memjet™ is a trade mark of Silverbrook Research Pty Ltd, Australia.
  • the invention is configured to work with the netpage networked computer system, a detailed overview of which follows. It will be appreciated that not every implementation will necessarily embody all or even most of the specific details and extensions discussed below in relation to the basic system. However, the system is described in its most complete form to reduce the need for external reference when attempting to understand the context in which the preferred embodiments and aspects of the present invention operate.
  • the preferred form of the netpage system employs a computer interface in the form of a mapped surface, that is, a physical surface which contains references to a map of the surface maintained in a computer system. The map references can be queried by an appropriate sensing device.
  • the map references may be encoded visibly or invisibly, and defined in such a way that a local query on the mapped surface yields an unambiguous map reference both within the map and among different maps.
  • the computer system can contain information about features on the mapped surface, and such information can be retrieved based on map references supplied by a sensing device used with the mapped surface. The information thus retrieved can take the form of actions which are initiated by the computer system on behalf of the operator in response to the operator's interaction with the surface features.
  • the netpage system relies on the production of, and human interaction with, netpages. These are pages of text, graphics and images printed on ordinary paper, but which work like interactive webpages. Information is encoded on each page using ink which is substantially invisible to the unaided human eye.
  • the ink can be sensed by an optically imaging sensing device and transmitted to the netpage system.
  • the sensing device may take the form of a clicker (for clicking on a specific position on a surface), a pointer having a stylus (for pointing or gesturing on a surface using pointer strokes), or a pen having a marking nib (for marking a surface with ink when pointing, gesturing or writing on the surface).
  • References herein to a pen or “netpage pen” are provided by way of example only. It will, of course, be appreciated that the pen may take the form of any of the sensing devices described above.
  • active buttons and hyperlinks on each page can be clicked with the sensing device to request information from the network or to signal preferences to a network server.
  • text written by hand on a netpage is automatically recognized and converted to computer text in the netpage system, allowing forms to be filled in.
  • signatures recorded on a netpage are automatically verified, allowing e-commerce transactions to be securely authorized.
  • text on a netpage may be clicked or gestured to initiate a search based on keywords indicated by the user.
  • a printed netpage 1 can represent an interactive form which can be filled in by the user both physically, on the printed page, and "electronically", via communication between the pen and the netpage system.
  • the example shows a "Request” form containing name and address fields and a submit button.
  • the netpage consists of graphic data 2 printed using visible ink, and coded data 3 printed as a collection of tags 4 using invisible ink.
  • the corresponding page description 5, stored on the netpage network, describes the individual elements of the netpage. In particular, it describes the type and spatial extent (zone) of each interactive element (i.e. the text field or button in the example), to allow the netpage system to correctly interpret input via the netpage.
  • the submit button 6, for example, has a zone 7 which corresponds to the spatial extent of the corresponding graphic 8.
  • a netpage sensing device 101 works in conjunction with a netpage relay device 601, which is an Internet-connected device for home, office or mobile use.
  • the pen is wireless and communicates securely with the netpage relay device 601 via a short-range radio link 9.
  • the netpage pen 101 utilises a wired connection, such as a USB or other serial connection, to the relay device 601.
  • the relay device 601 performs the basic function of relaying interaction data to a page server 10, which interprets the interaction data.
  • the relay device 601 may, for example, take the form of a personal computer 601a, a netpage printer 601b or some other relay 601c.
  • the netpage printer 601b is able to deliver, periodically or on demand, personalized newspapers, magazines, catalogs, brochures and other publications, all printed at high quality as interactive netpages.
  • the netpage printer is an appliance which can be, for example, wall-mounted adjacent to an area where the morning news is first consumed, such as in a user's kitchen, near a breakfast table, or near the household's point of departure for the day. It also comes in tabletop, desktop, portable and miniature versions. Netpages printed on-demand at their point of consumption combine the ease-of-use of paper with the timeliness and interactivity of an interactive medium.
  • the netpage relay device 601 may be a portable device, such as a mobile phone or PDA, a laptop or desktop computer, or an information appliance connected to a shared display, such as a TV. If the relay device 601 is not a netpage printer 601b which prints netpages digitally and on demand, the netpages may be printed by traditional analog printing presses, using such techniques as offset lithography, flexography, screen printing, relief printing and rotogravure, as well as by digital printing presses, using techniques such as drop-on-demand inkjet, continuous inkjet, dye transfer, and laser printing.
  • the netpage sensing device 101 interacts with the coded data on a printed netpage 1, or other printed substrate such as a label of a product item 251, and communicates, via a short-range radio link 9, the interaction to the relay 601.
  • the relay 601 sends corresponding interaction data to the relevant netpage page server 10 for interpretation.
  • Raw data received from the sensing device 101 may be relayed directly to the page server 10 as interaction data.
  • the interaction data may be encoded in the form of an interaction URI and transmitted to the page server 10 via a user's web browser running on the relay device 601 (e.g. a mobile phone).
  • Interpretation of the interaction data by the page server 10 may result in direct access to information requested by the user.
  • This information may be sent from the page server 10 to, for example, a user's display device (e.g. a display device associated with the relay device 601).
  • the information sent to the user may be in the form of a webpage constructed by the page server 10 and the webpage may be constructed using information from external web services 200 (e.g. search engines) or local web resources accessible by the page server 10.
  • the page server 10 may access application computer software running on a netpage application server 13.
  • a two-step information retrieval process may be employed. Interaction data is sent from the sensing device 101 to the relay device 601 in the usual way. The relay device 601 then sends the interaction data to the page server 10 for interpretation with reference to the relevant page description 5. Then, the page server 10 forms a request (typically in the form of a request URI) and sends this request URI back to the user's relay device 601. A web browser running on the relay device 601 then sends the request URI to a netpage web server 201, which interprets the request. The netpage web server 201 may interact with local web resources and external web services 200 to interpret the request and construct a webpage.
  • Once the webpage has been constructed by the netpage web server 201, it is transmitted to the web browser running on the user's relay device 601, which typically displays the webpage.
  • This system architecture is particularly useful for performing searching via netpages, as described in our earlier US Patent Application No. 11/672,950 filed on February 8, 2007 (the contents of which are incorporated by reference).
  • the request URI may encode search query terms, which are searched via the netpage web server 201.
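  • Purely as an illustration of the two-step retrieval described above, a request URI carrying search query terms could be assembled as follows; the host, path and parameter names are invented, since the specification only states that the URI encodes the query.

```python
from urllib.parse import urlencode

def build_request_uri(query_terms, page_id):
    # Hypothetical host, path and parameter names, for illustration only.
    params = {"q": " ".join(query_terms), "page": page_id}
    return "https://netpage.example.com/search?" + urlencode(params)

# build_request_uri(["cursor", "control"], "1234")
# -> 'https://netpage.example.com/search?q=cursor+control&page=1234'
```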
  • the netpage relay device 601 can be configured to support any number of sensing devices, and a sensing device can work with any number of netpage relays.
  • each netpage sensing device 101 has a unique identifier. This allows each user to maintain a distinct profile with respect to a netpage page server 10 or application server 13.
  • Digital, on-demand delivery of netpages 1 may be performed by the netpage printer 601b, which exploits the growing availability of broadband Internet access.
  • Netpage publication servers 14 on the netpage network are configured to deliver print- quality publications to netpage printers. Periodical publications are delivered automatically to subscribing netpage printers via pointcasting and multicasting Internet protocols. Personalized publications are filtered and formatted according to individual user profiles.
  • a netpage pen may be registered with a netpage registration server 11 and linked to one or more payment card accounts. This allows e-commerce payments to be securely authorized using the netpage pen.
  • the netpage registration server compares the signature captured by the netpage pen with a previously registered signature, allowing it to authenticate the user's identity to an e-commerce server. Other biometrics can also be used to verify identity.
  • One version of the netpage pen includes fingerprint scanning, verified in a similar way by the netpage registration server.
  • the object models which follow are described using Unified Modeling Language (UML) class diagrams.
  • a class diagram consists of a set of object classes connected by relationships, and two kinds of relationships are of interest here: associations and generalizations.
  • An association represents some kind of relationship between objects, i.e. between instances of classes.
  • a generalization relates actual classes, and can be understood in the following way: if a class is thought of as the set of all objects of that class, and class A is a generalization of class B, then B is simply a subset of A.
  • the UML does not directly support second-order modelling - i.e. classes of classes.
  • Each class is drawn as a rectangle labelled with the name of the class. It contains a list of the attributes of the class, separated from the name by a horizontal line, and a list of the operations of the class, separated from the attribute list by a horizontal line. In the class diagrams which follow, however, operations are never modelled.
  • An association is drawn as a line joining two classes, optionally labelled at either end with the multiplicity of the association.
  • the default multiplicity is one.
  • An asterisk (*) indicates a multiplicity of "many", i.e. zero or more.
  • Each association is optionally labelled with its name, and is also optionally labelled at either end with the role of the corresponding class.
  • An open diamond indicates an aggregation association ("is-part-of"), and is drawn at the aggregator end of the association line.
  • a generalization relationship ("is-a") is drawn as a solid line joining two classes, with an arrow (in the form of an open triangle) at the generalization end.
  • Netpages are the foundation on which a netpage network is built. They provide a paper-based user interface to published information and interactive services.
  • a netpage consists of a printed page (or other surface region) invisibly tagged with references to an online description of the page.
  • the online page description is maintained persistently by the netpage page server 10.
  • the page description describes the visible layout and content of the page, including text, graphics and images. It also describes the input elements on the page, including buttons, hyperlinks, and input fields.
  • a netpage allows markings made with a netpage pen on its surface to be simultaneously captured and processed by the netpage system.
  • each netpage may be assigned a unique page identifier. This page ID has sufficient precision to distinguish between a very large number of netpages.
  • Each reference to the page description is encoded in a printed tag. The tag identifies the unique page on which it appears, and thereby indirectly identifies the page description. The tag also identifies its own position on the page. Characteristics of the tags are described in more detail below.
  • Tags are typically printed in infrared-absorptive ink on any substrate which is infrared-reflective, such as ordinary paper, or in infrared fluorescing ink. Near-infrared wavelengths are invisible to the human eye but are easily sensed by a solid-state image sensor with an appropriate filter.
  • a tag is sensed by a 2D area image sensor in the netpage sensing device, and the tag data is transmitted to the netpage system via the nearest netpage relay device.
  • the pen is wireless and communicates with the netpage relay device via a short-range radio link.
  • Tags are sufficiently small and densely arranged that the sensing device can reliably image at least one tag even on a single click on the page. It is important that the pen recognize the page ID and position on every interaction with the page, since the interaction is stateless. Tags are error-correctably encoded to make them partially tolerant to surface damage.
  • the netpage page server 10 maintains a unique page instance for each unique printed netpage, allowing it to maintain a distinct set of user-supplied values for input fields in the page description for each printed netpage.
  • the relationship between the page description, the page instance, and the printed netpage is shown in Figure 4.
  • the printed netpage may be part of a printed netpage document 45.
  • the page instance may be associated with both the netpage printer which printed it and, if known, the netpage user who requested it.
  • each tag identifies the region in which it appears, and the location of that tag within the region and an orientation of the tag relative to a substrate on which the tag is printed.
  • a tag may also contain flags which relate to the region as a whole or to the tag.
  • One or more flag bits may, for example, signal a tag sensing device to provide feedback indicative of a function associated with the immediate area of the tag, without the sensing device having to refer to a description of the region.
  • a netpage pen may, for example, illuminate an "active area" LED when in the zone of a hyperlink.
  • each tag typically contains an easily recognized invariant structure which aids initial detection, and which assists in minimizing the effect of any warp induced by the surface or by the sensing process.
  • the tags preferably tile the entire page, and are sufficiently small and densely arranged that the pen can reliably image at least one tag even on a single click on the page. It is important that the pen recognize the page ID and position on every interaction with the page, since the interaction is stateless.
  • the region to which a tag refers coincides with an entire page, and the region ID encoded in the tag is therefore synonymous with the page ID of the page on which the tag appears.
  • the region to which a tag refers can be an arbitrary subregion of a page or other surface. For example, it can coincide with the zone of an interactive element, in which case the region ID can directly identify the interactive element.
  • Each tag contains 120 bits of information, typically allocated as shown in Table 1. Assuming a maximum tag density of 64 per square inch, a 16-bit tag ID supports a region size of up to 1024 square inches. Larger regions can be mapped continuously without increasing the tag ID precision simply by using abutting regions and maps. The 100-bit region ID allows 2^100 (approximately 10^30, or a million trillion trillion) different regions to be uniquely identified.
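  • Table 1 is not reproduced here, so the exact field allocation below is an assumption; the sketch simply illustrates how a 120-bit payload could be split into a 100-bit region ID, a 16-bit tag ID and 4 flag bits, consistent with the figures quoted above.

```python
def pack_tag_data(region_id, tag_id, flags):
    """Assumed layout: 100-bit region ID | 16-bit tag ID | 4 flag bits = 120 bits."""
    assert region_id < (1 << 100) and tag_id < (1 << 16) and flags < (1 << 4)
    return (region_id << 20) | (tag_id << 4) | flags

def unpack_tag_data(word):
    return word >> 20, (word >> 4) & 0xFFFF, word & 0xF

# A 16-bit tag ID at 64 tags per square inch covers 2**16 / 64 = 1024 square inches.
```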
  • the 120 bits of tag data are redundantly encoded using a (15, 5) Reed-Solomon code. This yields 360 encoded bits consisting of 6 codewords of 15 4-bit symbols each.
  • the (15, 5) code allows up to 5 symbol errors to be corrected per codeword, i.e. it is tolerant of a symbol error rate of up to 33% per codeword.
  • Each 4-bit symbol is represented in a spatially coherent way in the tag, and the symbols of the six codewords are interleaved spatially within the tag. This ensures that a burst error (an error affecting multiple spatially adjacent bits) damages a minimum number of symbols overall and a minimum number of symbols in any one codeword, thus maximising the likelihood that the burst error can be fully corrected.
  • Any suitable error-correcting code can be used in place of a (15, 5) Reed-Solomon code, for example a Reed-Solomon code with more or less redundancy, with the same or different symbol and codeword sizes; another block code; or a different kind of code, such as a convolutional code (see, for example, Stephen B. Wicker, Error Control Systems for Digital Communication and Storage, Prentice-Hall 1995, the contents of which are herein incorporated by cross-reference).
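  • The following sketch shows only the symbol interleaving step for the (15, 5) arrangement described above; the Reed-Solomon encoder itself is left as a placeholder rather than tied to any particular library.

```python
def rs_encode_15_5(five_symbols):
    # Placeholder: a real GF(16) (15, 5) Reed-Solomon encoder returns 15 symbols.
    raise NotImplementedError

def interleave(codewords):
    """Round-robin interleave the symbols of the six 15-symbol codewords so a
    burst error touches as few symbols of any one codeword as possible."""
    return [cw[i] for i in range(15) for cw in codewords]

def encode_tag(data_symbols):
    """120 data bits = 30 four-bit symbols = 6 blocks of 5 symbols; each block
    expands to a 15-symbol codeword, giving 90 symbols (360 bits) after interleaving."""
    blocks = [data_symbols[i:i + 5] for i in range(0, 30, 5)]
    return interleave([rs_encode_15_5(block) for block in blocks])
```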
  • the physical representation of the tag, shown in Figure 5a, includes fixed target structures 15, 16, 17 and variable data areas 18.
  • the fixed target structures allow a sensing device such as the netpage pen to detect the tag and infer its three-dimensional orientation relative to the sensor.
  • the data areas contain representations of the individual bits of the encoded tag data.
  • the tag is rendered at a resolution of 256x256 dots. When printed at 1600 dots per inch this yields a tag with a diameter of about 4 mm.
  • the tag is designed to be surrounded by a "quiet area" of radius 16 dots. Since the quiet area is also contributed by adjacent tags, it only adds 16 dots to the effective diameter of the tag.
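  • As a quick check of the stated dimensions:

```python
DOTS, DPI, MM_PER_INCH = 256, 1600, 25.4

tag_diameter_mm = DOTS / DPI * MM_PER_INCH               # ~4.06 mm, i.e. "about 4 mm"
effective_diameter_mm = (DOTS + 16) / DPI * MM_PER_INCH  # quiet area adds 16 dots
print(round(tag_diameter_mm, 2), round(effective_diameter_mm, 2))  # 4.06 4.32
```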
  • the tag may include a plurality of target structures.
  • a detection ring 15 allows the sensing device to initially detect the tag. The ring is easy to detect because it is rotationally invariant and because a simple correction of its aspect ratio removes most of the effects of perspective distortion.
  • An orientation axis 16 allows the sensing device to determine the approximate planar orientation of the tag due to the yaw of the sensor. The orientation axis is skewed to yield a unique orientation.
  • Four perspective targets 17 allow the sensing device to infer an accurate two-dimensional perspective transform of the tag and hence an accurate three-dimensional position and orientation of the tag relative to the sensor.
  • In order to support "single-click" interaction with a tagged region via a sensing device, the sensing device must be able to see at least one entire tag in its field of view no matter where in the region or at what orientation it is positioned.
  • the required diameter of the field of view of the sensing device is therefore a function of the size and spacing of the tags.
  • the tag image processing and decoding performed by a sensing device such as the netpage pen is shown in Figure 7. While a captured image is being acquired from the image sensor, the dynamic range of the image is determined (at 20). The center of the range is then chosen as the binary threshold for the image 21. The image is then thresholded and segmented into connected pixel regions (i.e. shapes 23) (at 22). Shapes which are too small to represent tag target structures are discarded. The size and centroid of each shape is also computed. Binary shape moments 25 are then computed (at 24) for each shape, and these provide the basis for subsequently locating target structures. Central shape moments are by their nature invariant of position, and can be easily made invariant of scale, aspect ratio and rotation.
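  • A minimal sketch of the thresholding, segmentation and moment computation just described is given below; it assumes a greyscale image held in a numpy array in which printed (infrared-absorptive) marks are darker than the background, and the minimum shape size is an arbitrary illustrative value.

```python
import numpy as np
from scipy import ndimage

def find_candidate_shapes(image, min_pixels=20):
    """Threshold at the centre of the image's dynamic range, segment into
    connected pixel regions, discard shapes too small to be tag targets and
    return each remaining shape's size, centroid and second-order central moments."""
    lo, hi = float(image.min()), float(image.max())
    threshold = (lo + hi) / 2.0
    labels, count = ndimage.label(image < threshold)   # dark marks on light paper
    shapes = []
    for k in range(1, count + 1):
        ys, xs = np.nonzero(labels == k)
        if xs.size < min_pixels:
            continue                                   # too small to be a target
        cx, cy = xs.mean(), ys.mean()
        dx, dy = xs - cx, ys - cy
        mu20, mu02, mu11 = (dx * dx).mean(), (dy * dy).mean(), (dx * dy).mean()
        shapes.append({"size": int(xs.size), "centroid": (cx, cy),
                       "moments": (mu20, mu11, mu02)})
    return shapes, threshold
```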
  • the ring target structure 15 is the first to be located (at 26).
  • a ring has the advantage of being very well behaved when perspective-distorted. Matching proceeds by aspect-normalizing and rotation-normalizing each shape's moments. Once its second-order moments are normalized the ring is easy to recognize even if the perspective distortion was significant. The ring's original aspect and rotation 27 together provide a useful approximation of the perspective transform.
  • the axis target structure 16 is the next to be located (at 28). Matching proceeds by applying the ring's normalizations to each shape's moments, and rotation-normalizing the resulting moments. Once its second-order moments are normalized the axis target is easily recognized. Note that one third-order moment is required to disambiguate the two possible orientations of the axis.
  • the shape is deliberately skewed to one side to make this possible. Note also that it is only possible to rotation-normalize the axis target after it has had the ring's normalizations applied, since the perspective distortion can hide the axis target's axis.
  • the axis target's original rotation provides a useful approximation of the tag's rotation due to pen yaw 29.
  • the four perspective target structures 17 are the last to be located (at 30). Good estimates of their positions are computed based on their known spatial relationships to the ring and axis targets, the aspect and rotation of the ring, and the rotation of the axis. Matching proceeds by applying the ring's normalizations to each shape's moments. Once their second-order moments are normalized the circular perspective targets are easy to recognize, and the target closest to each estimated position is taken as a match.
  • the original centroids of the four perspective targets are then taken to be the perspective-distorted corners 31 of a square of known size in tag space, and an eight-degree-of-freedom perspective transform 33 is inferred (at 32) based on solving the well-understood equations relating the four tag-space and image-space point pairs (see Heckbert, P., Fundamentals of Texture Mapping and Image Warping, Masters Thesis, Dept. of EECS, U. of California at Berkeley, Technical Report No. UCB/CSD 89/516, June 1989, the contents of which are herein incorporated by cross-reference).
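  • The equations referred to above reduce to a linear system once the bottom-right coefficient of the 3x3 matrix is fixed at 1; a compact numpy sketch (not the patent's implementation) is:

```python
import numpy as np

def perspective_transform(tag_pts, image_pts):
    """Solve the eight-degree-of-freedom transform mapping four known tag-space
    points to their observed image-space positions (each given as four (x, y) pairs)."""
    A, b = [], []
    for (x, y), (u, v) in zip(tag_pts, image_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)             # homogeneous 3x3 matrix

def project(H, x, y):
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w
```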
  • the inferred tag-space to image-space perspective transform is used to project (at 36) each known data bit position in tag space into image space where the real-valued position is used to bilinearly interpolate (at 36) the four relevant adjacent pixels in the input image.
  • the previously computed image threshold 21 is used to threshold the result to produce the final bit value 37.
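A sketch of the data-bit sampling step, assuming the homography and image threshold computed above and a table of known bit positions in tag space; the convention that a dark dot encodes a one bit is an assumption of the illustration.

```python
import numpy as np

def sample_data_bits(image, H, threshold, bit_positions):
    """Project each known tag-space bit position into image space via the
    homography H, bilinearly interpolate the four adjacent pixels, and
    threshold the result to a final bit value."""
    h, w = image.shape
    bits = []
    for (tx, ty) in bit_positions:
        u, v, s = H @ np.array([tx, ty, 1.0])
        u, v = u / s, v / s                          # real-valued image position
        x0, y0 = int(np.floor(u)), int(np.floor(v))
        x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
        fx, fy = u - x0, v - y0
        value = (image[y0, x0] * (1 - fx) * (1 - fy) +
                 image[y0, x1] * fx * (1 - fy) +
                 image[y1, x0] * (1 - fx) * fy +
                 image[y1, x1] * fx * fy)
        bits.append(1 if value < threshold else 0)   # dark dot => one bit (assumption)
    return bits
```

If bit_positions is ordered codeword by codeword, the codewords are implicitly de-interleaved during sampling, as noted in the decoding step below.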
  • each of the six 60-bit Reed-Solomon codewords is decoded (at 38) to yield 20 decoded bits 39, or 120 decoded bits in total. Note that the codeword symbols are sampled in codeword order, so that codewords are implicitly de-interleaved during the sampling process.
  • the ring target 15 is only sought in a subarea of the image whose relationship to the image guarantees that the ring, if found, is part of a complete tag. If a complete tag is not found and successfully decoded, then no pen position is recorded for the current frame.
  • an alternative strategy involves seeking another tag in the current image.
  • the obtained tag data indicates the identity of the region containing the tag and the position of the tag within the region.
  • An accurate position 35 of the pen nib in the region, as well as the overall orientation 35 of the pen, is then inferred (at 34) from the perspective transform 33 observed on the tag and the known spatial relationship between the image sensor (which lies on the optical axis of the pen) and the nib (which typically lies on the physical axis of the pen).
  • the image sensor is usually offset from the nib.
  • the tag structure described above is designed to support the tagging of non-planar surfaces where a regular tiling of tags may not be possible.
  • where a regular tiling of tags is possible, i.e. on surfaces such as sheets of paper and the like, more efficient tag structures can be used which exploit the regular nature of the tiling.
  • Figure 6a shows a square tag 4 with four perspective targets 17.
  • the tag represents sixty 4-bit Reed-Solomon symbols 47, for a total of 240 bits.
  • the tag represents each one bit as a dot 48, and each zero bit by the absence of the corresponding dot.
  • the perspective targets are designed to be shared between adjacent tags, as shown in Figures 6b and 6c.
  • Figure 6b shows a square tiling of 16 tags and the corresponding minimum field of view 193, which must span the diagonals of two tags.
  • Figure 6c shows a square tiling of nine tags, containing all one bits for illustration purposes.
  • 112 bits of tag data are redundantly encoded to produce 240 encoded bits, i.e. four 60-bit Reed-Solomon codewords.
  • the four codewords are interleaved spatially within the tag to maximize resilience to burst errors. Assuming a 16-bit tag ID as before, this allows a region ID of up to 92 bits.
  • the data-bearing dots 48 of the tag are designed to not overlap their neighbors, so that groups of tags cannot produce structures which resemble targets. This also saves ink.
  • the perspective targets therefore allow detection of the tag, so further targets are not required.
  • Tag image processing proceeds as described in section 1.2.4 above, with the exception that steps 26 and 28 are omitted.
  • the tag may contain an orientation feature to allow disambiguation of the four possible orientations of the tag relative to the sensor.
  • orientation data can be embedded in the tag data.
  • the four codewords can be arranged so that each tag orientation contains one codeword placed at that orientation, as shown in Figure 6d, where each symbol is labelled with the number of its codeword (1-4) and the position of the symbol within the codeword (A-O).
  • Tag decoding then consists of decoding one codeword at each orientation.
  • Each codeword can either contain a single bit indicating whether it is the first codeword, or two bits indicating which codeword it is.
  • the latter approach has the advantage that if, say, the data content of only one codeword is required, then at most two codewords need to be decoded to obtain the desired data. This may be the case if the region ID is not expected to change within a stroke and is thus only decoded at the start of a stroke. Within a stroke only the codeword containing the tag ID is then desired. Furthermore, since the rotation of the sensing device changes slowly and predictably within a stroke, only one codeword typically needs to be decoded per frame.
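A sketch of the orientation-disambiguation idea described above: a codeword is decoded at each candidate orientation until one identifies itself. The rs_decode callable and the two-bit codeword-label layout are placeholders, not the actual netpage codec.

```python
def identify_orientation(candidate_codewords, rs_decode):
    """candidate_codewords: the four codewords as sampled at each of the four
    90-degree sensor orientations.  rs_decode is a hypothetical Reed-Solomon
    decoder returning the decoded data bits, or None on failure.  Because one
    codeword is placed at each tag orientation, the codeword sampled at a given
    sensor orientation decodes correctly, and its (assumed) two-bit label says
    which codeword it is; the difference between the sampling orientation and
    the label gives the rotation of the tag relative to the sensor."""
    for sampled_at, codeword in enumerate(candidate_codewords):
        data = rs_decode(codeword)
        if data is None:                      # e.g. too many symbol errors; try the next
            continue
        label = (data[0] << 1) | data[1]      # assumed 2-bit codeword number
        tag_rotation = (sampled_at - label) % 4
        return tag_rotation, label, data
    return None
```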
  • each bit value (or multi-bit value) is typically represented by an explicit glyph, i.e. no bit value is represented by the absence of a glyph.
  • the tag data must then contain a marker pattern, and this must be redundantly encoded to allow reliable detection.
  • the overhead of such marker patterns is similar to the overhead of explicit perspective targets.
  • One such scheme uses dots positioned at various points relative to grid vertices to represent different glyphs and hence different multi-bit values (see Anoto Technology Description, Anoto, April 2000).
  • Decoding a tag typically results in a region ID, a tag ID, and a tag-relative pen transform. Before the tag ID and the tag-relative pen location can be translated into an absolute location within the tagged region, the location of the tag within the region must be known. This is given by a tag map, a function which maps each tag ID in a tagged region to a corresponding location.
  • the tag map class diagram is shown in Figure 22, as part of the netpage printer class diagram.
  • a tag map reflects the scheme used to tile the surface region with tags, and this can vary according to surface type. When multiple tagged regions share the same tiling scheme and the same tag numbering scheme, they can also share the same tag map.
  • the tag map for a region must be retrievable via the region ID. Thus, given a region ID, a tag ID and a pen transform, the tag map can be retrieved, the tag ID can be translated into an absolute tag location within the region, and the tag-relative pen location can be added to the tag location to yield an absolute pen location within the region.
  • the tag ID may have a structure which assists translation through the tag map. It may, for example, encode cartesian (x-y) coordinates or polar coordinates, depending on the surface type on which it appears.
  • the tag ID structure is dictated by and known to the tag map, and tag IDs associated with different tag maps may therefore have different structures.
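A minimal tag-map sketch, assuming a planar region whose tag IDs are assigned in row-major (cartesian) order with a fixed tag pitch; both the numbering scheme and the pitch parameter are assumptions made only for this illustration.

```python
class CartesianTagMap:
    """Maps a tag ID to an absolute tag location within a tagged region,
    assuming tag IDs are assigned in row-major order with a fixed pitch."""
    def __init__(self, tags_across, tag_pitch_mm):
        self.tags_across = tags_across
        self.tag_pitch_mm = tag_pitch_mm

    def tag_location(self, tag_id):
        x = (tag_id % self.tags_across) * self.tag_pitch_mm
        y = (tag_id // self.tags_across) * self.tag_pitch_mm
        return x, y

def absolute_pen_location(tag_map, tag_id, tag_relative_location):
    """Add the tag-relative pen location to the tag's location to yield an
    absolute pen location within the region."""
    tx, ty = tag_map.tag_location(tag_id)
    rx, ry = tag_relative_location
    return tx + rx, ty + ry

# Usage: given a decoded region ID, the corresponding tag map is first looked up
# (e.g. from a registry keyed by region ID) before the translation above.
```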
  • the tags usually function in cooperation with associated visual elements on the netpage. These function as user interactive elements in that a user can interact with the printed page using an appropriate sensing device in order for tag data to be read by the sensing device and for an appropriate response to be generated in the netpage system. Additionally (or alternatively), decoding a tag may be used to provide orientation data indicative of the yaw of the pen relative to the surface. The orientation data may be determined using, for example, the orientation axis 16 described above (Section 2.3) or orientation data embedded in the tag data (Section 2.5).
  • A preferred embodiment of a document and page description class diagram is shown in Figures 12 and 13.
  • a document is described at three levels. At the most abstract level the document 836 has a hierarchical structure whose terminal elements 839 are associated with content objects 840 such as text objects, text style objects, image objects, etc.
  • at the next level, the document is paginated and otherwise formatted. Formatted terminal elements 835 will in some cases be associated with content objects which are different from those associated with their corresponding terminal elements, particularly where the content objects are style-related.
  • Each printed instance of a document and page is also described separately, to allow input captured through a particular page instance 830 to be recorded separately from input captured through other instances of the same page description.
  • a formatted document 834 consists of a set of formatted page descriptions 5, each of which consists of a set of formatted terminal elements 835. Each formatted element has a spatial extent or zone 58 on the page. This defines the active area of input elements such as hyperlinks and input fields.
  • a document instance 831 corresponds to a formatted document 834. It consists of a set of page instances 830, each of which corresponds to a page description 5 of the formatted document. Each page instance 830 describes a single unique printed netpage 1, and records the page ID 50 of the netpage.
  • a page instance is not part of a document instance if it represents a copy of a page requested in isolation.
  • a page instance consists of a set of terminal element instances 832.
  • An element instance only exists if it records instance-specific information.
  • a hyperlink instance exists for a hyperlink element because it records a transaction ID 55 which is specific to the page instance, and a field instance exists for a field element because it records input specific to the page instance.
  • An element instance does not exist, however, for static elements such as textflows.
  • a terminal element 839 can be a visual element or an input element.
  • a visual element can be a static element 843 or a dynamic element 846.
  • An input element may be, for example, a hyperlink element 844 or a field element 845, as shown in Figure 14. Other types of input element are of course possible, such as input elements which select a particular mode of the pen 101.
  • a page instance has a background field 833 which is used to record any digital ink captured on the page which does not apply to a specific input element.
  • a tag map 811 is associated with each page instance to allow tags on the page to be translated into locations on the page.
  • a netpage network consists of a distributed set of netpage page servers 10, netpage registration servers 11, netpage ID servers 12, netpage application servers 13, and netpage relay devices 601 connected via a network 19 such as the Internet, as shown in Figure 3.
  • the netpage registration server 11 is a server which records relationships between users, pens, printers and applications, and thereby authorizes various network activities. It authenticates users and acts as a signing proxy on behalf of authenticated users in application transactions. It also provides handwriting recognition services.
  • a netpage page server 10 maintains persistent information about page descriptions and page instances.
  • the netpage network includes any number of page servers, each handling a subset of page instances. Since a page server also maintains user input values for each page instance, clients such as netpage relays 601 send netpage input directly to the appropriate page server. The page server interprets any such input relative to the description of the corresponding page.
  • a netpage ID server 12 allocates document IDs 51 on demand, and provides load-balancing of page servers via its ID allocation scheme.
  • a netpage relay 601 uses the Internet Domain Name System (DNS), or similar, to resolve a netpage page ID 50 into the network address of the netpage page server 10 handling the corresponding page instance.
  • a netpage application server 13 is a server which hosts interactive netpage applications.
  • Netpage servers can be hosted on a variety of network server platforms from manufacturers such as IBM, Hewlett-Packard, and Sun. Multiple netpage servers can run concurrently on a single host, and a single server can be distributed over a number of hosts. Some or all of the functionality provided by netpage servers, and in particular the functionality provided by the ID server and the page server, can also be provided directly in a netpage appliance such as a netpage printer, in a computer workstation, or on a local network.
  • the active sensing device of the netpage system may take the form of a clicker; in the preferred embodiment described below it takes the form of a pen 101.
  • a pen 101 uses its embedded controller 134 to capture and decode netpage tags from a page via an image sensor.
  • the image sensor is a solid-state device provided with an appropriate filter to permit sensing at only near-infrared wavelengths.
  • the system is able to sense when the nib is in contact with the surface, and the pen is able to sense tags at a sufficient rate to capture human handwriting (i.e. at 200 dpi or greater and 100 Hz or faster).
  • Information captured by the pen may be encrypted and wirelessly transmitted to the printer (or base station), the printer or base station interpreting the data with respect to the (known) page structure.
  • the preferred embodiment of the netpage pen 101 operates both as a normal marking ink pen and as a non-marking stylus (i.e. as a pointer).
  • the marking aspect is not necessary for using the netpage system as a browsing system, such as when it is used as an Internet interface.
  • Each netpage pen is registered with the netpage system and has a unique pen ID 61.
  • Figure 11 shows the netpage pen class diagram, reflecting pen- related information maintained by a registration server 11 on the netpage network.
  • the pen determines the position and orientation of its nib relative to the page.
  • the nib is attached to a force sensor, and the force on the nib is interpreted relative to a threshold to indicate whether the pen is "up" or "down".
  • the force may be captured as a continuous value to allow, say, the full dynamics of a signature to be verified.
  • the pen determines the position and orientation of its nib on the netpage by imaging, in the infrared spectrum, an area 193 of the page in the vicinity of the nib. It decodes the nearest tag and computes the position of the nib relative to the tag from the observed perspective distortion on the imaged tag and the known geometry of the pen optics. Although the position resolution of the tag may be low, because the tag density on the page is inversely proportional to the tag size, the adjusted position resolution is quite high, exceeding the minimum resolution required for accurate handwriting recognition. Pen actions relative to a netpage are captured as a series of strokes. A stroke consists of a sequence of time-stamped pen positions on the page, initiated by a pen-down event and completed by the subsequent pen-up event.
  • a stroke is also tagged with the page ID 50 of the netpage whenever the page ID changes, which, under normal circumstances, is at the commencement of the stroke.
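One possible representation of a stroke as a sequence of time-stamped pen positions between a pen-down and a pen-up event, tagged with the page ID and nib style; the field names are illustrative only.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Stroke:
    """A sequence of time-stamped pen positions, initiated by a pen-down
    event and completed by the subsequent pen-up event."""
    page_id: int                      # page ID 50 of the netpage
    nib_style: str = "default"        # nib style the stroke is tagged with
    samples: List[Tuple[float, float, float]] = field(default_factory=list)

    def add_sample(self, timestamp, x, y):
        self.samples.append((timestamp, x, y))

    @property
    def duration(self):
        return self.samples[-1][0] - self.samples[0][0] if self.samples else 0.0
```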
  • Each netpage pen has a current selection 826 associated with it, allowing the user to perform copy and paste operations etc.
  • the selection is timestamped to allow the system to discard it after a defined time period.
  • the current selection describes a region of a page instance. It consists of the most recent digital ink stroke captured through the pen relative to the background area of the page. It is interpreted in an application-specific manner once it is submitted to an application via a selection hyperlink activation.
  • Each pen has a current nib 824. This is the nib last notified by the pen to the system. In the case of the default netpage pen described above, either the marking black ink nib or the non-marking stylus nib is current.
  • Each pen also has a current nib style 825. This is the nib style last associated with the pen by an application, e.g. in response to the user selecting a color from a palette.
  • the default nib style is the nib style associated with the current nib. Strokes captured through a pen are tagged with the current nib style. When the strokes are subsequently reproduced, they are reproduced in the nib style with which they are tagged.
  • the pen 101 may have one or more buttons 209. As described in the above-identified US Application, the button(s) may be used to determine a mode or behavior of the pen, which, in turn, determines how a stroke or, more generally, interaction data is interpreted by the page server 10.
  • a sequence of captured strokes is referred to as digital ink.
  • Digital ink forms the basis for the digital exchange of drawings and handwriting, for online recognition of handwriting, and for online verification of signatures.
  • the pen is typically wireless and transmits digital ink to the relay device 601 via a short-range radio link.
  • the transmitted digital ink is encrypted for privacy and security and packetized for efficient transmission, but is always flushed on a pen-up event to ensure timely handling in the printer.
  • When the pen is out-of-range of a relay device 601 it buffers digital ink in internal memory, which has a capacity of over ten minutes of continuous handwriting. When the pen is once again within range of a relay device, it transfers any buffered digital ink.
  • a pen can be registered with any number of relay devices, but because all state data resides in netpages both on paper and on the network, it is largely immaterial which relay device a pen is communicating with at any particular time.
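A sketch of the buffer-and-flush behaviour described above, with an in-memory queue standing in for the pen's internal memory; the relay interface (in_range/send) is a placeholder.

```python
from collections import deque

class DigitalInkBuffer:
    """Buffers digital ink while the pen is out of range of a relay device and
    flushes it when a relay is reachable or on a pen-up event."""
    def __init__(self, relay):
        self.relay = relay            # placeholder object with in_range() and send()
        self.pending = deque()

    def add(self, stroke_packet):
        self.pending.append(stroke_packet)

    def flush(self):
        while self.pending and self.relay.in_range():
            self.relay.send(self.pending.popleft())

    def on_pen_up(self):
        # Digital ink is always flushed on a pen-up event for timely handling.
        self.flush()
```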
  • the netpage relay device 601 receives data relating to a stroke from the pen 101 when the pen is used to interact with a netpage 1.
  • the coded data 3 of the tags 4 is read by the pen when it is used to execute a movement, such as a stroke.
  • the data allows the identity of the particular page to be determined and an indication of the positioning of the pen relative to the page to be obtained.
  • Interaction data, typically comprising the page ID 50 and at least one position of the pen, is transmitted to the relay device 601, which resolves, via the DNS, the page ID 50 of the stroke into the network address of the netpage page server 10 which maintains the corresponding page instance 830. It then transmits the stroke to the page server.
  • Each netpage consists of a compact page layout maintained persistently by a netpage page server (see below).
  • the page layout refers to objects such as images, fonts and pieces of text, typically stored elsewhere on the netpage network.
  • When the page server receives the stroke from the pen, it retrieves the page description to which the stroke applies, and determines which element of the page description the stroke intersects. It is then able to interpret the stroke in the context of the type of the relevant element.
  • a “click” is a stroke where the distance and time between the pen down position and the subsequent pen up position are both less than some small maximum.
  • An object which is activated by a click typically requires a click (rather than a longer stroke) to be activated; accordingly, a longer stroke is ignored.
  • the failure of a pen action, such as a "sloppy" click, to register may be indicated by the lack of response from the pen's "ok” LED.
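A sketch of the click test described above, treating a stroke as a click when both the displacement and the duration between pen-down and pen-up fall below small maxima; the threshold values and units are illustrative, not taken from the specification.

```python
import math

def is_click(stroke_samples, max_distance=2.0, max_duration=0.3):
    """stroke_samples: list of (timestamp, x, y) from pen-down to pen-up.
    Returns True when both the displacement and the elapsed time between the
    pen-down and pen-up positions fall below small maxima (assumed units:
    millimetres and seconds)."""
    if len(stroke_samples) < 2:
        return True
    t0, x0, y0 = stroke_samples[0]
    t1, x1, y1 = stroke_samples[-1]
    distance = math.hypot(x1 - x0, y1 - y0)
    return distance <= max_distance and (t1 - t0) <= max_duration
```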
  • Hyperlinks and form fields are two kinds of input elements, which may be contained in a netpage page description. Input through a form field can also trigger the activation of an associated hyperlink. These types of input elements are described in further detail in the above-identified patents and patent applications, the contents of which are herein incorporated by cross-reference.
  • the pen 101 includes a housing 102 in the form of a plastics moulding having walls 103 defining an interior space 104 for mounting the pen components.
  • Mode selector buttons 209 are provided on the housing 102.
  • the pen top 105 is in operation rotatably mounted at one end 106 of the housing 102.
  • a semi-transparent cover 107 is secured to the opposite end 108 of the housing 102.
  • the cover 107 is also of moulded plastics, and is formed from semi- transparent material in order to enable the user to view the status of the LED mounted within the housing 102.
  • the cover 107 includes a main part 109 which substantially surrounds the end 108 of the housing 102 and a projecting portion 110 which projects back from the main part 109 and fits within a corresponding slot 111 formed in the walls 103 of the housing 102.
  • a radio antenna 112 is mounted behind the projecting portion 110, within the housing 102.
  • Screw threads 113 surrounding an aperture 113A on the cover 107 are arranged to receive a metal end piece 114, including corresponding screw threads 115.
  • the metal end piece 114 is removable to enable ink cartridge replacement.
  • a tri-color status LED 116 is mounted within the cover 107.
  • the antenna 112 is also mounted on the flex PCB 117.
  • the status LED 116 is mounted at the top of the pen 101 for good all-around visibility.
  • the pen can operate both as a normal marking ink pen and as a non-marking stylus.
  • An ink pen cartridge 118 with nib 119 and a stylus 120 with stylus nib 121 are mounted side by side within the housing 102. Either the ink cartridge nib 119 or the stylus nib 121 can be brought forward through open end 122 of the metal end piece 114, by rotation of the pen top 105.
  • Respective slider blocks 123 and 124 are mounted to the ink cartridge 118 and stylus 120, respectively.
  • a rotatable cam barrel 125 is secured to the pen top 105 in operation and arranged to rotate therewith.
  • the cam barrel 125 includes a cam 126 in the form of a slot within the walls 181 of the cam barrel.
  • Cam followers 127 and 128 projecting from slider blocks 123 and 124 fit within the cam slot 126.
  • rotation of the cam barrel 125 causes the slider blocks 123 or 124 to move relative to each other to project either the pen nib 119 or the stylus nib 121 out through the hole 122 in the metal end piece 114.
  • the pen 101 has three states of operation. By turning the top 105 through 90° steps, the three states are: ink cartridge nib 119 out; stylus nib 121 out; and neither nib out.
  • a second flex PCB 129 is mounted on an electronics chassis 130 which sits within the housing 102.
  • the second flex PCB 129 mounts an infrared LED 131 for providing infrared radiation for projection onto the surface.
  • An image sensor 132 is provided mounted on the second flex PCB 129 for receiving reflected radiation from the surface.
  • the second flex PCB 129 also mounts a radio frequency chip 133, which includes an RF transmitter and RF receiver, and a controller chip 134 for controlling operation of the pen 101.
  • An optics block 135 (formed from moulded clear plastics) sits within the cover 107 and projects an infrared beam onto the surface and receives images onto the image sensor 132.
  • Power supply wires 136 connect the components on the second flex PCB 129 to battery contacts 137 which are mounted within the cam barrel 125.
  • a terminal 138 connects to the battery contacts 137 and the cam barrel 125.
  • a three volt rechargeable battery 139 sits within the cam barrel 125 in contact with the battery contacts.
  • An induction charging coil 140 is mounted about the second flex PCB 129 to enable recharging of the battery 139 via induction.
  • the second flex PCB 129 also mounts an infrared LED 143 and infrared photodiode 144 for detecting displacement in the cam barrel 125 when either the stylus 120 or the ink cartridge 118 is used for writing, in order to enable a determination of the force being applied to the surface by the pen nib 119 or stylus nib 121.
  • the IR photodiode 144 detects light from the IR LED 143 via reflectors (not shown) mounted on the slider blocks 123 and 124.
  • the top 105 also includes a clip 142 for clipping the pen 101 to a pocket.
  • the pen 101 is arranged to determine the position of its nib (stylus nib 121 or ink cartridge nib 119) by imaging, in the infrared spectrum, an area of the surface in the vicinity of the nib. It records the location data from the nearest location tag, and is arranged to calculate the distance of the nib 121 or 119 from the location tag utilising optics 135 and controller chip 134.
  • the controller chip 134 calculates the orientation (yaw) of the pen using an orientation indicator in the imaged tag, and the nib-to-tag distance from the perspective distortion observed on the imaged tag.
  • the pen 101 can transmit the digital ink data (which is encrypted for security and packaged for efficient transmission) to the computing system.
  • the digital ink data is transmitted as it is formed.
  • digital ink data is buffered within the pen 101 (the pen 101 circuitry includes a buffer arranged to store digital ink data for approximately 12 minutes of the pen motion on the surface) and can be transmitted later.
  • the controller 134 notifies the system of the pen ID, nib ID 175, current absolute time 176, and the last absolute time it obtained from the system prior to going offline.
  • the pen ID allows the computing system to identify the pen when there is more than one pen being operated with the computing system.
  • the nib ID allows the computing system to identify which nib (stylus nib 121 or ink cartridge nib 119) is presently being used.
  • the computing system can vary its operation depending upon which nib is being used. For example, if the ink cartridge nib 119 is being used the computing system may defer producing feedback output because immediate feedback is provided by the ink markings made on the surface. Where the stylus nib 121 is being used, the computing system may produce immediate feedback output. Since a user may change the nib 119, 121 between one stroke and the next, the pen 101 optionally records a nib ID for a stroke 175. This becomes the nib ID implicitly associated with later strokes.
  • Cartridges having particular nib characteristics may be interchangeable in the pen.
  • the pen controller 134 may interrogate a cartridge to obtain the nib ID 175 of the cartridge.
  • the nib ID 175 may be stored in a ROM or a barcode on the cartridge.
  • the controller 134 notifies the system of the nib ID whenever it changes. The system is thereby able to determine the characteristics of the nib used to produce a stroke 175, and is thereby subsequently able to reproduce the characteristics of the stroke itself.
  • the controller chip 134 is mounted on the second flex PCB 129 in the pen 101.
  • Figure 10 is a block diagram illustrating in more detail the architecture of the controller chip 134.
  • Figure 10 also shows representations of the RF chip 133, the image sensor 132, the tri-color status LED 116, the IR illumination LED 131, the IR force sensor LED 143, and the force sensor photodiode 144.
  • the pen controller chip 134 includes a controlling processor 145.
  • Bus 146 enables the exchange of data between components of the controller chip 134.
  • Flash memory 147 and a 512 KB DRAM 148 are also included.
  • An analog-to-digital converter 149 is arranged to convert the analog signal from the force sensor photodiode 144 to a digital signal.
  • An image sensor interface 152 interfaces with the image sensor 132.
  • a transceiver controller 153 and base band circuit 154 are also included to interface with the RF chip 133 which includes an RF circuit 155 and RF resonators and inductors 156 connected to the antenna 112.
  • the controlling processor 145 captures and decodes location data from tags from the surface via the image sensor 132, monitors the force sensor photodiode 144, controls the LEDs 116, 131 and 143, and handles short-range radio communication via the radio transceiver 153. It is a medium-performance (~40 MHz) general-purpose RISC processor.
  • the processor 145, digital transceiver components (transceiver controller 153 and baseband circuit 154), image sensor interface 152, flash memory 147 and 512 KB DRAM 148 are integrated in a single controller ASIC.
  • Analog RF components (RF circuit 155 and RF resonators and inductors 156) are provided in the separate RF chip.
  • the image sensor is a 215x215 pixel CCD (such a sensor is produced by
  • the controller ASIC 134 enters a quiescent state after a period of inactivity when the pen 101 is not in contact with a surface. It incorporates a dedicated circuit 150 which monitors the force sensor photodiode 144 and wakes up the controller 134 via the power manager 151 on a pen-down event.
  • the radio transceiver communicates in the unlicensed 900MHz band normally used by cordless telephones, or alternatively in the unlicensed 2.4GHz industrial, scientific and medical (ISM) band, and uses frequency hopping and collision detection to provide interference-free communication.
  • the pen incorporates an Infrared Data Association (IrDA) interface for short-range communication with a base station or netpage printer.
  • the pen 101 includes a pair of orthogonal accelerometers mounted in the normal plane of the pen 101 axis.
  • the accelerometers 190 are shown in Figures 9 and 10 in ghost outline.
  • each location tag ID can then identify an object of interest rather than a position on the surface. For example, if the object is a user interface input element (e.g. a command button), then the tag ID of each location tag within the area of the input element can directly identify the input element.
  • the acceleration measured by the accelerometers in each of the x and y directions is integrated with respect to time to produce an instantaneous velocity and position.
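A sketch of the double integration described above, accumulating velocity and position from x and y acceleration samples using simple Euler integration; resetting on a pen-down event is an illustrative choice for limiting drift.

```python
class AccelerometerTracker:
    """Integrates x/y acceleration with respect to time to produce an
    instantaneous velocity and position (simple Euler integration)."""
    def __init__(self):
        self.vx = self.vy = 0.0
        self.x = self.y = 0.0

    def update(self, ax, ay, dt):
        self.vx += ax * dt
        self.vy += ay * dt
        self.x += self.vx * dt
        self.y += self.vy * dt
        return self.x, self.y

    def reset(self):
        """E.g. on a pen-down event, when accumulated drift should be discarded."""
        self.vx = self.vy = 0.0
        self.x = self.y = 0.0
```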
  • a Netpage pen 101 can be used to generate cursor control commands (i.e. typically mouse events) to allow seamless transitions between paper interactions and on-screen interactions.
  • a computer system associated with the display device may receive cursor control data (in the form of relative motion data) directly from the pen 101, with the pen performing the necessary processing to generate the cursor control data from the interaction data.
  • the computer system may receive interaction data as usual from the pen 101, and then generate cursor control commands for an associated display device.
  • the computer system may be a remote server which receives interaction data from the Netpage pen 101 and transmits cursor control commands to a display device near the user (e.g. mobile phone). Any of these system architectures may support cursor control, although generation of cursor control commands in the pen is generally preferred.
  • a cursor control behaviour can be selected in various ways, including via a momentary or persistent mode switch.
  • cursor control behaviour may also be automatically selected in the absence of a Netpage tag pattern, if the Netpage pen 101 incorporates a motion sensor that functions in the absence of a tag pattern.
  • if positions generated by a Netpage pen 101 are intrinsically absolute, such as when generated at least partially with reference to a Netpage tag pattern, then such positions can be trivially converted into absolute cursor control commands.
  • the extent of the physical surface with which the sensing device is interacting is ideally mapped to the extent of a display device for the purposes of translating sensing device positions into cursor control commands.
  • cursor control commands commonly specify changes in position rather than absolute positions - i.e. they are relative position changes.
  • Absolute positions generated by a Netpage pen 101 by reading tags 4 may be trivially converted into relative cursor control commands.
  • the relative scale of physical displacements of the Netpage reader and corresponding screen displacements can be specified as a matter of user preference, as well as whether the mapping is absolute or relative.
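A sketch of the two mappings just described: an absolute mapping of the page extent to the display extent, and a relative mapping of successive absolute positions to a scaled cursor displacement. The page and screen dimensions and the scale factor are user- or application-supplied parameters assumed for the illustration.

```python
def absolute_to_cursor(pen_x, pen_y, page_w, page_h, screen_w, screen_h):
    """Map an absolute pen position on the page to an absolute cursor position
    on the display by scaling the page extent to the screen extent."""
    cx = pen_x / page_w * screen_w
    cy = pen_y / page_h * screen_h
    return int(round(cx)), int(round(cy))

def absolute_to_relative(prev_pos, curr_pos, scale=1.0):
    """Convert successive absolute pen positions into a relative cursor
    displacement, scaled according to user preference."""
    dx = (curr_pos[0] - prev_pos[0]) * scale
    dy = (curr_pos[1] - prev_pos[1]) * scale
    return dx, dy
```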
  • although relative displacements of the pen 101 may be readily calculated from absolute motion data, the implementation of cursor control behaviour from absolute positions is problematic in practice. In order for cursor control behaviour to appear 'natural' from a user's perspective, cursor movements on a screen should always follow substantially the movement of the pen 101 i.e.
  • a left pen movement moves the on-screen cursor to the left, an up pen movement moves the on-screen cursor upwards etc.
  • provided the tagged surface is in its nominal orientation relative to the user, on-screen cursor movements naturally reflect movement of the pen 101.
  • if, however, the surface is rotated relative to its nominal orientation (e.g. turned upside down), the on-screen cursor movements will be confusing and unnatural for the user, i.e. a left pen movement would move the on-screen cursor to the right. Since users are not accustomed to orienting their traditional mousepads in a certain way, they equally would not expect to orient their Netpage-tagged surface in a certain way in order to invoke natural cursor-control behaviour. In other words, it is desirable that the surface and pen 101 should invoke natural mouse behaviour irrespective of how the surface is oriented.
  • since each tag encodes an orientation relative to the surface (see above), the yaw of the pen 101 relative to the surface may be calculated by making use of this orientation information. If orientation data as well as absolute motion data is received by a computer system, then the computer system can determine movement of the pen 101 relative to itself by taking into account the yaw of the pen relative to the surface. Hence, movement of the pen, from a user's perspective, is substantially translated into a corresponding on-screen cursor movement, as sketched below. In this way, the pen 101 may be used to naturally control the movement of the cursor, irrespective of the orientation of the surface.
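A sketch of the yaw compensation referred to above: a surface-frame displacement derived from successive absolute positions is rotated by the pen's yaw so that the resulting relative motion reflects the pen's movement from the user's perspective, irrespective of how the surface is oriented. The sign convention for yaw is an assumption of the illustration.

```python
import math

def surface_to_user_relative(dx, dy, yaw):
    """Rotate a surface-frame displacement (dx, dy) by the pen's yaw relative
    to the surface (radians), yielding a displacement in the pen/user frame.
    Assumes yaw is measured from the surface's x-axis to the pen's x-axis."""
    cos_y, sin_y = math.cos(yaw), math.sin(yaw)
    rel_dx = cos_y * dx + sin_y * dy      # rotation by -yaw
    rel_dy = -sin_y * dx + cos_y * dy
    return rel_dx, rel_dy

def cursor_command(prev_abs, curr_abs, yaw, scale=1.0):
    """Translate two successive absolute positions plus orientation data into
    a relative cursor control command (mouse-style delta)."""
    dx, dy = curr_abs[0] - prev_abs[0], curr_abs[1] - prev_abs[1]
    rel_dx, rel_dy = surface_to_user_relative(dx, dy, yaw)
    return rel_dx * scale, rel_dy * scale
```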
  • if positions generated by a Netpage pen 101 are intrinsically relative, such as when they are generated using a relative motion sensing mechanism (e.g. accelerometers, mechanical mouse, optical mouse etc.), then they map naturally to relative cursor control commands, again with a (potentially user-specified) scale factor.
  • Cursor control is a subset of graphical user interface (GUI) input and control in general, which also includes functions such as scrolling and function or key input, as described below.
  • Scrolling can be supported in the conventional way via a scroll wheel on the Netpage pen 101. It can also be provided implicitly as part of a cursor control behaviour, when motion sensing at least partially occurs with reference to a Netpage tag pattern, by reserving part of the extent of each printed Netpage tag pattern as a scroll region. For example, the right-hand couple of inches of each printed Netpage might be reserved for vertical scrolling, and the bottom couple of inches might be reserved for horizontal scrolling.
  • This scroll region may be active when it is determined that the pen 101 is operating in a cursor mode (for example, by a mode switch on the pen).
  • Scrolling, like cursor control, can be absolute or relative, and this can be specified as a matter of user preference.
  • the dominant direction of the user's scroll gesture within a single scroll region, relative to the orientation of the tag pattern, can also be used to distinguish vertical from horizontal scrolling.
  • a threshold can be imposed on the vertical and horizontal components of the user's gesture to prevent inadvertent diagonal scrolling.
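A sketch of distinguishing vertical from horizontal scrolling from the dominant direction of a gesture within a scroll region, with a threshold to suppress inadvertent diagonal scrolling; the threshold value is illustrative.

```python
def classify_scroll(dx, dy, min_component=1.0):
    """dx, dy: gesture displacement within a scroll region, expressed relative
    to the orientation of the tag pattern.  Returns 'horizontal', 'vertical'
    or None, ignoring gestures whose dominant component is below a small
    threshold so that slight, inadvertent movements do not scroll."""
    ax, ay = abs(dx), abs(dy)
    if max(ax, ay) < min_component:
        return None
    return 'horizontal' if ax >= ay else 'vertical'
```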
  • one or two scroll mode selection switches can be provided on the pen 101.
  • Figure 15 shows cursor control and scrolling functions mapped onto an arbitrary Netpage tagged page.
  • the functions are only operative when the Netpage pen's cursor control behaviour is selected.
  • cursor control and scrolling actions may be performed via certain regions of any Netpage, even though there may be no explicit visual indication of this functionality in the visible content of the page. For example, scroll regions along a righthand and bottom edge region may be standard for any Netpage. These scroll regions would be active only in the cursor mode.
  • Figure 16 shows cursor control and scrolling functions explicitly mapped onto a page with a matching visual layout.
  • the page shown in Figure 16 is a dedicated cursor control page, specifically tailored for GUI control, including cursor control and scrolling actions.
  • Figure 17 shows cursor control, scrolling functions and keyboard keys explicitly mapped onto a page with a matching visual layout.
  • the functions are self-selecting, and so can be operative in any pen behaviour.
  • Any function or key input can also be generated via a suitable Netpage tagged surface, either via tags specifically coded to indicate corresponding functions or keys, or via a page description that indicates corresponding functions or keys in the usual way.
  • the former has the advantage that the input can be identified without consulting the page description 5.
  • this allows the user's relay device 601, which may commonly be the user's display device, to capture such input without recourse to the Netpage server 10.
  • any function or key input generated by the server with reference to the page description can be routed to the user's display device via the netpage architecture (see Figure 1 and US Patent Publication No. 2006/025175, the contents of which is herein incorporated by reference).
  • This has the advantage that it supports a display device that is separate from the relay device.
  • Printed controls can also be provided for selecting one-shot or persistent modes, such as a scrolling mode.
  • Printed controls may be selected via a typical netpage interaction, whereby a user selects an interactive control element on a page, and this interaction is interpreted as a mode selection in the computer system via the page description.

Abstract

A system for controlling movement of a cursor on a display device, the system comprising: a substrate having a position-coding pattern disposed on or in a surface thereof; a sensing device comprising: an image sensor for optically imaging the position-coding pattern; and a processor configured for: generating absolute motion data by determining a plurality of absolute positions of the sensing device relative to the surface using the imaged position- coding pattern; generating orientation data indicative of an orientation of the sensing device relative to the substrate; and using the orientation data to translate the absolute motion data into relative motion data, said relative motion data being indicative of relative motion of the sensing device from the perspective of a user; and communication means for communicating the relative motion data to a computer system; and the computer system configured for: receiving said relative motion data from the sensing device; interpreting said relative motion data as cursor movement; and generating cursor control commands for said display device.

Description

SYSTEM FOR CONTROLLING MOVEMENT OF A CURSOR ON A DISPLAY
DEVICE
FIELD OF INVENTION
The present invention relates to a method and system for reading a position-coding pattern disposed on a surface. It has been developed primarily to improve the functionality of a sensing device used for reading the position-coding pattern.
BACKGROUND
The Applicant has previously described a method of enabling users to access information from a computer system via a printed substrate, e.g. paper. The substrate has coded data printed thereon, which is read by an optical sensing device when the user interacts with the substrate using the sensing device. A computer receives interaction data from the sensing device and uses this data to determine what action is being requested by the user. For example, a user may make handwritten input onto a form or make a selection gesture around a printed item. This input is interpreted by the computer system with reference to a page description corresponding to the printed substrate.
It would be desirable to make a broader range of functionalities available to the user via the sensing device. It would be particularly desirable to provide this broader range of functionalities without introducing a plethora of separate functional systems into the sensing device.
SUMMARY OF INVENTION
In a first aspect the present invention provides a system for controlling movement of a cursor on a display device, the system comprising: a substrate having a position-coding pattern disposed on or in a surface thereof; a sensing device comprising: an image sensor for optically imaging the position-coding pattern; and a processor configured for: generating absolute motion data by determining a plurality of absolute positions of the sensing device relative to the surface using the imaged position-coding pattern; generating orientation data indicative of an orientation of the sensing device relative to the substrate; and using the orientation data to translate the absolute motion data into relative motion data, said relative motion data being indicative of relative motion of the sensing device from the perspective of a user; and communication means for communicating the relative motion data to a computer system; and the computer system configured for: receiving said relative motion data from the sensing device; interpreting said relative motion data as cursor movement; and generating cursor control commands for said display device.
Optionally, the relative motion data is indicative of relative position changes of the sensing device substantially from a perspective of a user, and irrespective of an orientation of said substrate.
Optionally, the position-coding pattern comprises a plurality of tags, each tag identifying a location on the surface and a rotational orientation of the tag relative to the substrate, thereby enabling a yaw of the sensing device relative to the substrate to be determined.
Optionally, said display device is selected from at least one of: a display device associated with the computer system; a display device integral with the computer system; and a display device remote from the computer system.
Optionally, said sensing device is operable in a plurality of modes, said plurality including a cursor mode and at least one other mode, and wherein the computer system is further configured for: determining that said sensing device is operating in a cursor mode.
Optionally, said at least one other mode is selected from the group comprising: a scroll mode; a hyperlinking mode; a searching mode; a content-extraction mode; and a handwriting mode. Optionally, said sensing device comprises a mode selector, and said interaction data comprises mode data indicative of said cursor mode.
Optionally, said mode selector comprises at least one of: one or more mode buttons operable by a user; and a sensor for detecting a force exerted by said sensing device on said surface.
Optionally, said computer system is configured for retrieving stored mode data indicative of a most recent mode selected for said sensing device.
Optionally, said computer system is further configured for: determining if said sensing device is positioned within a cursor zone of said substrate, said cursor zone being activated by determination of said cursor mode; and interpreting relative motion of said sensing device only within said cursor zone as said cursor movement.
Optionally, said computer system is further configured for: determining if said sensing device is positioned within a scroll zone of said substrate, said scroll zone being activated by determination of said cursor mode; interpreting the interaction of said sensing device within said scroll zone as a scroll action; and scrolling a page displayed on said display device according to said scroll action.
Optionally, said computer system is configured for at least one of: interpreting at least one absolute position of said sensing device within said scroll zone to be indicative of a scroll direction; and interpreting relative motion of said sensing device within said scroll zone to be indicative of a scroll direction. Optionally, the position-coding pattern is further indicative of an identity of the substrate and the interaction data comprises substrate identity data.
Optionally, the substrate is a cursor control substrate and said computer system is configured for using the substrate identity data to retrieve a cursor page description corresponding to said cursor control substrate, said cursor page description comprising a cursor zone within which the interaction of said sensing device is interpreted as said cursor movement.
Optionally, said cursor page description comprises a scroll zone within which the interaction of said sensing device is interpreted as a scroll action, and wherein said computer system is configured to scroll a page displayed on said display device according to said scroll action.
Optionally, said cursor control substrate has visible markings indicating at least one of: said cursor zone, said scroll zone and a scroll direction.
Optionally, said scroll zone is located at an edge region of said substrate.
In a further aspect the present invention provides a method of controlling movement of a cursor on a display device via a substrate having a position-coding pattern disposed on or in a surface thereof, said method comprising the steps of: receiving, in a computer system, interaction data indicative of an interaction of the sensing device with the substrate, said interaction data comprising: absolute motion data indicative of a plurality of absolute positions of the sensing device relative to the surface; and orientation data indicative of an orientation of the sensing device relative to the substrate; and using the orientation data to translate the absolute motion data into relative motion data, said relative motion data being indicative of position changes of the sensing device relative to itself; interpreting said relative motion data as cursor movement; and generating cursor control commands for said display device. In another aspect the present invention provides a sensing device for controlling movement of a cursor on a display device, said sensing device comprising: an image sensor for optically imaging a position-coding pattern disposed on or in a surface; and a processor configured for: generating absolute motion data by determining a plurality of absolute positions of the sensing device relative to the surface using the imaged position- coding pattern; generating orientation data indicative of an orientation of the sensing device relative to the substrate; and using the orientation data to translate the absolute motion data into relative motion data, said relative motion data being indicative of relative motion of the sensing device from the perspective of a user; and communication means for communicating the relative motion data to a computer system, thereby enabling the computer system to generate cursor control commands using the relative motion data for controlling movement of the cursor on the display device.
In another aspect the present invention provides a computer system for controlling movement of a cursor on a display device via a substrate having a position-coding pattern disposed on or in a surface thereof, said computer system being configured for: receiving interaction data indicative of an interaction of the sensing device with the substrate, said interaction data comprising: absolute motion data indicative of a plurality of absolute positions of the sensing device relative to the surface; and orientation data indicative of an orientation of the sensing device relative to the substrate; using the orientation data to translate the absolute motion data into relative motion data, said relative motion data being indicative of position changes of the sensing device relative to itself; interpreting said relative motion data as cursor movement; and generating cursor control commands for said display device. In a second aspect the present invention provides a sensing device for interaction with a surface, said sensing device having automatic mode selection, said sensing device comprising: an image sensor for imaging the surface and generating image data; a motion sensor configured for determining one or more relative position changes of the sensing device; a processor configured for: receiving the image data; and automatically selecting, using said image data, either an interaction mode or a cursor mode for said sensing device; and communication means for transmitting either interaction data or cursor data to a computer system, dependent on said selected mode, wherein said processor is configured to: select the interaction mode and generate interaction data from the image data if said image data indicates that said sensing device is interacting with a first surface having a position-coding pattern disposed thereon, said interaction data being indicative of at least one absolute location of the sensing device relative to the surface; and select the cursor mode if said image data indicates that said sensing device is interacting with a second surface lacking a position-coding pattern, said cursor data being indicative of said one or more relative position changes of the sensing device.
Optionally, the motion sensor is selected from any one of the group comprising: at least one accelerometer; a mechanical mouse; an optical mouse; and a point interferometry device.
Optionally, said motion sensor is an optical mouse utilizing at least one of: a pattern-based optical mouse technique; a texture-based optical mouse technique; and a laser-speckle-based optical mouse technique.
Optionally, the position-coding pattern of the first surface is indicative of a plurality of locations on the surface and of an identity of a region.
Optionally, in said interaction mode, said processor is configured for determining the identity of the region using the imaged position-coding pattern, and said interaction data is further indicative of the identity of the region.
Optionally, the identity of the region is coincident with an identity of the surface.
Optionally, the position-coding pattern is comprised of a plurality of tags, each tag identifying the identity of the surface and a location of the tag on the surface.
In a further aspect the present invention provides a system for initiating an action corresponding to interaction of a sensing device relative to a surface, said system comprising: (A) the sensing device comprising: an image sensor for imaging the surface and generating image data; a motion sensor configured for determining one or more relative position changes of the sensing device; a processor configured for: receiving the image data; and automatically selecting, using said image data, either an interaction mode or a cursor mode for said sensing device; and communication means for transmitting either interaction data or cursor data to a computer system, dependent on said selected mode, wherein said processor is configured to: select the interaction mode and generate interaction data if said image data indicates that said sensing device is interacting with a first surface having a position- coding pattern disposed thereon, said interaction data being indicative of an absolute location of the sensing device relative to the surface; and select the cursor mode if said image data indicates that said sensing device is interacting with a second surface lacking a position-coding pattern, said cursor data being indicative of said one or more relative position changes of the sensing device; and (B) the computer system configured for: receiving the interaction data and the cursor data from the sensing device; interpreting said interaction data to initiate an action corresponding to said interaction with said surface; and interpreting said cursor data to control movement of a cursor on a display device.
Optionally, said action initiated by said interaction data is selected from at least one of: hyperlinking; form-filling; searching; and content-extraction.
Optionally, the position-coding pattern of the first surface is indicative of a plurality of locations on the surface and of an identity of a region.
Optionally, in said interaction mode, said processor is configured for determining the identity of the region using the imaged position-coding pattern, and said interaction data is further indicative of the identity of the region.
Optionally, said computer system is configured to interpret said interaction data by the steps of: identifying and retrieving a page description corresponding to the first surface using the identity of the region; determining a request using the retrieved page description and the interaction data; and initiating an action based on said request.
Optionally, the identity of the region is coincident with an identity of the surface.
Optionally, the position-coding pattern is comprised of a plurality of tags, each tag identifying the identity of the surface and a location of the tag on the surface.
Optionally, said display device is selected from at least one of: a display device associated with the computer system; a display device integral with the computer system; and a display device remote from the computer system.
In a further aspect the present invention provides a method of automatically selecting a mode of a sensing device interacting with a surface, said sensing device comprising a motion sensor configured for determining one or more relative position changes of the sensing device, said method comprising the steps of: imaging the surface and generating image data; automatically selecting, using said image data, either an interaction mode or a cursor mode for said sensing device; and transmitting either interaction data or cursor data to a computer system, dependent on said selected mode, wherein: the interaction mode is selected and the interaction data is generated from the image data if said image data indicates that said sensing device is interacting with a first surface having a position-coding pattern disposed thereon, said interaction data being indicative of at least one absolute location of the sensing device relative to the surface; and the cursor mode is selected and one or more relative position changes of the sensing device are determined if said image data indicates that said sensing device is interacting with a second surface lacking a position-coding pattern, said cursor data being indicative of said one or more relative position changes of the sensing device.
In a further aspect the method further comprises the steps of: receiving the interaction data in the computer system; and interpreting said interaction data to initiate an action corresponding to said interaction with said surface.
In another aspect the method further comprises the steps of: receiving the cursor data from the sensing device; and interpreting said cursor data to control movement of a cursor on a display device.
In a third aspect the present invention provides a system for enabling scrolling of a page displayed on a display device, the system comprising: a substrate having a position-coding pattern disposed on or in a surface thereof; a sensing device operable in a plurality of modes including a cursor mode, said sensing device comprising: an image sensor for optically imaging the position-coding pattern; and a processor configured for generating interaction data indicative of an interaction of the sensing device with the surface, said interaction data being indicative of a position or movement of the sensing device relative to the surface; communication means for communicating the interaction data to a computer system; and the computer system configured for: receiving the interaction data from the sensing device; determining that said sensing device is operating in a cursor mode; determining a scroll zone for said substrate; determining if said sensing device is positioned within said scroll zone; interpreting said position or movement of said sensing device within said scroll zone as a scrolling action; and generating a scroll control command for said display device so as to scroll said displayed page.
Optionally, the sensing device is operable in two or more modes selected from the group comprising: said cursor mode; a hyperlinking mode; a searching mode; a content-extraction mode; and a handwriting mode.
Optionally, said sensing device comprises a mode selector, and said interaction data comprises mode data indicative of said cursor mode.
Optionally, said mode selector comprises at least one of: one or more mode buttons operable by a user; and a sensor for detecting a force exerted by said sensing device on said surface.
Optionally, said computer system is configured for retrieving stored mode data indicative of a most recent mode selected for said sensing device.
Optionally, said computer system is configured for at least one of: interpreting at least one absolute position of said sensing device within said scroll zone to be indicative of a scroll direction; and interpreting movement of said sensing device within said scroll zone to be indicative of a scroll direction. Optionally, said scroll direction is selected from at least one of: vertical scrolling; horizontal scrolling; and diagonal scrolling.
Optionally, said scroll zone is located at an edge region of said substrate.
Optionally, said substrate comprises a plurality of scroll zones.
Optionally, said substrate comprises visible markings indicating at least one of: said scroll zone and a scroll direction.
Optionally, said computer system is further configured for: determining if said sensing device is positioned within a cursor zone of said substrate; interpreting movement of said sensing device as a cursor movement; and generating cursor control commands for said display device.
Optionally, said display device is selected from at least one of: a display device associated with the computer system; a display device integral with the computer system; and a display device remote from the computer system.
In a further aspect the system further comprises the display device.
Optionally, the position-coding pattern is further indicative of an identity of the substrate and the interaction data comprises substrate identity data.
Optionally, said computer system is configured for retrieving a page description corresponding to said substrate using said substrate identity.
Optionally, said computer is configured for retrieving said page description if it is determined that said sensing device is not operating in said cursor mode.
Optionally, said computer system is configured for: using said position or movement of said sensing device together with said retrieved page description to interpret said interaction of said sensing device with said substrate; and initiating an action corresponding to said interaction.
In another aspect the present invention provides a method of enabling scrolling of a page displayed on a display device via a substrate having a position-coding pattern disposed in or on a surface thereof, said method comprising, in a computer system, the steps of: receiving interaction data indicative of an interaction of a sensing device with the substrate, said interaction data being indicative of a position or movement of the sensing device relative to the surface; determining that said sensing device is operating in a cursor mode; determining a scroll zone for said substrate; determining if said sensing device is positioned within said scroll zone; interpreting said position or movement of said sensing device within said scroll zone as a scrolling action; and generating a scroll control command for said display device so as to scroll said displayed page.
In a further aspect the present invention provides a computer system for controlling scrolling of a page displayed on a display device, said computer system being configured for: receiving interaction data indicative of an interaction of a sensing device with a substrate having a position-coding pattern disposed on or in a surface thereof, said interaction data being indicative of a position or movement of the sensing device relative to the surface; determining that said sensing device is operating in a cursor mode; determining a scroll zone for said substrate; determining if said sensing device is positioned within said scroll zone; interpreting said position or movement of said sensing device within said scroll zone as a scrolling action; and generating a scroll control command for said display device so as to scroll said displayed page.
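A minimal sketch of how a computer system might interpret cursor-mode positions against edge scroll zones is given below; the zone geometry, scroll step and command format are illustrative assumptions only.

```python
def interpret_scroll(x, y, page_width, page_height, edge=10.0, step=3):
    """Map an absolute pen position on the substrate (dimensions in millimetres) to a
    scroll command, assuming scroll zones along the right and bottom edges."""
    if x >= page_width - edge:                 # right-edge zone: vertical scrolling
        direction = "down" if y > page_height / 2 else "up"
        return {"command": "scroll", "direction": direction, "lines": step}
    if y >= page_height - edge:                # bottom-edge zone: horizontal scrolling
        direction = "right" if x > page_width / 2 else "left"
        return {"command": "scroll", "direction": direction, "lines": step}
    return None                                # not in a scroll zone
```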
In a fourth aspect the present invention provides a system for enabling user input and control of a cursor on a display device, the system comprising: a substrate having a position-coding pattern disposed on or in a surface thereof, said substrate having at least one input element and a discrete cursor zone; a sensing device comprising: an image sensor for optically imaging the position-coding pattern; and a processor configured for generating interaction data indicative of an interaction of the sensing device with the surface, said interaction data being indicative of a position or movement of the sensing device relative to the surface; communication means for communicating the interaction data to a computer system; and the computer system configured for: receiving the interaction data from the sensing device; retrieving a page description corresponding to said substrate; determining whether said sensing device is positioned within said cursor zone; interpreting movement of said sensing device within said cursor zone as a cursor movement and generating corresponding cursor control commands for said display device; and otherwise determining if said position or movement of said sensing device is within a zone of said at least one input element and initiating an action corresponding to said at least one input element.
Optionally, said at least one input element is a GUI control button and said action is a corresponding GUI control action.
Optionally, said GUI control action is selected from the group comprising: scrolling; web browser control; page up; page down; cut; copy; paste; tab between GUI applications; launching of a GUI application; volume control; log off; sleep; and keyboard input.
Optionally, said substrate is an explicitly dedicated GUI control substrate, said substrate comprising visible markings indicating said cursor zone and said at least one GUI control button.
Optionally, said at least one input element is a hyperlink element, and said action is hyperlinking.
Optionally, said substrate comprises a scroll zone, and said computer system is configured for interpreting a position or movement of said sensing device within said scroll zone to be indicative of a scrolling action.
Optionally, said computer system is configured for at least one of: interpreting at least one absolute position of said sensing device within said scroll zone to be indicative of a scroll direction; and interpreting movement of said sensing device within said scroll zone to be indicative of a scroll direction.
Optionally, said scroll direction is selected from at least one of: vertical scrolling; horizontal scrolling; and diagonal scrolling.
Optionally, said substrate comprises a plurality of scroll zones.
Optionally, said substrate comprises visible markings indicating at least one of: said scroll zone and a scroll direction.
Optionally, said computer system is configured for interpreting movement of said sensing device within said cursor zone as relative movement. Optionally, said computer system is configured for interpreting the position or movement of said sensing device outside said cursor zone as an absolute position or movement relative to the surface.
Optionally, said display device is selected from at least one of: a display device associated with the computer system; a display device integral with the computer system; and a display device remote from the computer system.
In a further aspect the system further comprises the display device.
Optionally, the position-coding pattern is further indicative of an identity of the substrate and the interaction data comprises substrate identity data.
Optionally, said computer system is configured for retrieving the page description corresponding to said substrate using said substrate identity.
In another aspect the present invention provides a method of enabling user input and control of a cursor on a display device via a substrate having a position-coding pattern disposed in or on a surface thereof, said substrate having at least one input element and a discrete cursor zone, said method comprising, in a computer system, the steps of: receiving interaction data indicative of an interaction of a sensing device with the substrate, said interaction data being indicative of a position or movement of the sensing device relative to the surface; retrieving a page description corresponding to said substrate; determining whether said sensing device is positioned within said cursor zone; interpreting movement of said sensing device within said cursor zone as a cursor movement and generating corresponding cursor control commands for said display device; and otherwise determining if said position or movement of said sensing device is within a zone of said at least one input element and initiating an action corresponding to said at least one input element.
In a further aspect the present invention provides a computer system for enabling user input and control of a cursor on a display device via a substrate having a position-coding pattern disposed in or on a surface thereof, said substrate comprising at least one input element and a discrete cursor zone, said computer system being configured for: receiving interaction data indicative of an interaction of a sensing device with the substrate, said interaction data being indicative of a position or movement of the sensing device relative to the surface; retrieving a page description corresponding to said substrate; determining whether said sensing device is positioned within said cursor zone; interpreting movement of said sensing device within said cursor zone as a cursor movement and generating corresponding cursor control commands for said display device; and otherwise determining if said position or movement of said sensing device is within a zone of said at least one input element and initiating an action corresponding to said at least one input element.
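The cursor-zone and input-element dispatch described above can be illustrated with the following sketch; the page-description layout, zone representation and returned command format are assumptions made for the example.

```python
def dispatch(position, page_description, last_position=None):
    """Dispatch a pen position against a hypothetical page description holding a
    cursor zone and a list of input elements, each with a rectangular zone given as
    (left, top, right, bottom). A sketch only."""
    x, y = position

    def inside(zone):
        left, top, right, bottom = zone
        return left <= x <= right and top <= y <= bottom

    if inside(page_description["cursor_zone"]):
        # Inside the cursor zone: interpret movement as relative cursor movement.
        if last_position is None:
            return {"action": "cursor", "dx": 0.0, "dy": 0.0}
        return {"action": "cursor",
                "dx": x - last_position[0],
                "dy": y - last_position[1]}

    for element in page_description["input_elements"]:
        if inside(element["zone"]):
            # Outside the cursor zone: the absolute position selects an input element.
            return {"action": element["action"], "element": element["id"]}
    return None
```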
BRIEF DESCRIPTION OF DRAWINGS
Preferred and other embodiments of the invention will now be described, by way of non-limiting example only, with reference to the accompanying drawings, in which:
Figure 1 shows an embodiment of basic netpage architecture;
Figure 2 is a schematic of the relationship between a sample printed netpage and its online page description;
Figure 3 shows an embodiment of basic netpage architecture with various alternatives for the relay device;
Figure 3A illustrates a collection of netpage servers, Web terminals, printers and relays interconnected via a network;
Figure 4 is a schematic view of a high-level structure of a printed netpage and its online page description;
Figure 5A is a plan view showing a structure of a netpage tag;
Figure 5B is a plan view showing a relationship between a set of the tags shown in Figure 5A and a field of view of a netpage sensing device in the form of a netpage pen;
Figure 6A is a plan view showing an alternative structure of a netpage tag;
Figure 6B is a plan view showing a relationship between a set of the tags shown in Figure 6A and a field of view of a netpage sensing device in the form of a netpage pen;
Figure 6C is a plan view showing an arrangement of nine of the tags shown in Figure 6A where targets are shared between adjacent tags;
Figure 6D is a plan view showing the interleaving and rotation of the symbols of the four codewords of the tag shown in Figure 6A;
Figure 7 is a flowchart of a tag image processing and decoding algorithm;
Figure 8 is a perspective view of a netpage pen and its associated tag-sensing field-of-view cone;
Figure 9 is a perspective exploded view of the netpage pen shown in Figure 8;
Figure 10 is a schematic block diagram of a pen controller for the netpage pen shown in Figures 8 and 9;
Figure 11 is a schematic view of a pen class diagram;
Figure 12 is a schematic view of a document and page description class diagram;
Figure 13 is a schematic view of a document and page ownership class diagram;
Figure 14 is a schematic view of a terminal element specialization class diagram;
Figure 15 shows cursor control and scroll functions mapped onto an arbitrary page;
Figure 16 shows an explicit cursor control and scroll page; and
Figure 17 shows an explicit cursor control, scroll and keyboard page.
DETAILED DESCRIPTION OF PREFERRED AND OTHER EMBODIMENTS
Note: Memjet™ is a trade mark of Silverbrook Research Pty Ltd, Australia.
In the preferred embodiment, the invention is configured to work with the netpage networked computer system, a detailed overview of which follows. It will be appreciated that not every implementation will necessarily embody all or even most of the specific details and extensions discussed below in relation to the basic system. However, the system is described in its most complete form to reduce the need for external reference when attempting to understand the context in which the preferred embodiments and aspects of the present invention operate. In brief summary, the preferred form of the netpage system employs a computer interface in the form of a mapped surface, that is, a physical surface which contains references to a map of the surface maintained in a computer system. The map references can be queried by an appropriate sensing device. Depending upon the specific implementation, the map references may be encoded visibly or invisibly, and defined in such a way that a local query on the mapped surface yields an unambiguous map reference both within the map and among different maps. The computer system can contain information about features on the mapped surface, and such information can be retrieved based on map references supplied by a sensing device used with the mapped surface. The information thus retrieved can take the form of actions which are initiated by the computer system on behalf of the operator in response to the operator's interaction with the surface features.
In its preferred form, the netpage system relies on the production of, and human interaction with, netpages. These are pages of text, graphics and images printed on ordinary paper, but which work like interactive webpages. Information is encoded on each page using ink which is substantially invisible to the unaided human eye. The ink, however, and thereby the coded data, can be sensed by an optically imaging sensing device and transmitted to the netpage system. The sensing device may take the form of a clicker (for clicking on a specific position on a surface), a pointer having a stylus (for pointing or gesturing on a surface using pointer strokes), or a pen having a marking nib (for marking a surface with ink when pointing, gesturing or writing on the surface). References herein to "pen" or "netpage pen" are provided by way of example only. It will, of course, be appreciated that the pen may take the form of any of the sensing devices described above. In one embodiment, active buttons and hyperlinks on each page can be clicked with the sensing device to request information from the network or to signal preferences to a network server. In one embodiment, text written by hand on a netpage is automatically recognized and converted to computer text in the netpage system, allowing forms to be filled in. In other embodiments, signatures recorded on a netpage are automatically verified, allowing e-commerce transactions to be securely authorized. In other embodiments, text on a netpage may be clicked or gestured to initiate a search based on keywords indicated by the user.
As illustrated in Figure 2, a printed netpage 1 can represent an interactive form which can be filled in by the user both physically, on the printed page, and "electronically", via communication between the pen and the netpage system. The example shows a "Request" form containing name and address fields and a submit button. The netpage consists of graphic data 2 printed using visible ink, and coded data 3 printed as a collection of tags 4 using invisible ink. The corresponding page description 5, stored on the netpage network, describes the individual elements of the netpage. In particular it describes the type and spatial extent (zone) of each interactive element (i.e. text field or button in the example), to allow the netpage system to correctly interpret input via the netpage. The submit button 6, for example, has a zone 7 which corresponds to the spatial extent of the corresponding graphic 8.
As illustrated in Figures 1 and 3, a netpage sensing device 101, such as the pen shown in Figures 8 and 9 and described in more detail below, works in conjunction with a netpage relay device 601, which is an Internet-connected device for home, office or mobile use. The pen is wireless and communicates securely with the netpage relay device 601 via a short-range radio link 9. In an alternative embodiment, the netpage pen 101 utilises a wired connection, such as a USB or other serial connection, to the relay device 601.
The relay device 601 performs the basic function of relaying interaction data to a page server 10, which interprets the interaction data. As shown in Figure 3, the relay device 601 may, for example, take the form of a personal computer 601a, a netpage printer 601b or some other relay 601c.
The netpage printer 601b is able to deliver, periodically or on demand, personalized newspapers, magazines, catalogs, brochures and other publications, all printed at high quality as interactive netpages. Unlike a personal computer, the netpage printer is an appliance which can be, for example, wall-mounted adjacent to an area where the morning news is first consumed, such as in a user's kitchen, near a breakfast table, or near the household's point of departure for the day. It also comes in tabletop, desktop, portable and miniature versions. Netpages printed on-demand at their point of consumption combine the ease-of-use of paper with the timeliness and interactivity of an interactive medium.
Alternatively, the netpage relay device 601 may be a portable device, such as a mobile phone or PDA, a laptop or desktop computer, or an information appliance connected to a shared display, such as a TV. If the relay device 601 is not a netpage printer 601b which prints netpages digitally and on demand, the netpages may be printed by traditional analog printing presses, using such techniques as offset lithography, flexography, screen printing, relief printing and rotogravure, as well as by digital printing presses, using techniques such as drop-on-demand inkjet, continuous inkjet, dye transfer, and laser printing.
As shown in Figure 3, the netpage sensing device 101 interacts with the coded data on a printed netpage 1, or other printed substrate such as a label of a product item 251, and communicates, via a short-range radio link 9, the interaction to the relay 601. The relay 601 sends corresponding interaction data to the relevant netpage page server 10 for interpretation. Raw data received from the sensing device 101 may be relayed directly to the page server 10 as interaction data. Alternatively, the interaction data may be encoded in the form of an interaction URI and transmitted to the page server 10 via a user's web browser. Of course, the relay device 601 (e.g. mobile phone) may incorporate a web browser and a display device. Interpretation of the interaction data by the page server 10 may result in direct access to information requested by the user. This information may be sent from the page server 10 to, for example, a user's display device (e.g. a display device associated with the relay device 601). The information sent to the user may be in the form of a webpage constructed by the page server 10 and the webpage may be constructed using information from external web services 200 (e.g. search engines) or local web resources accessible by the page server 10. In some circumstances, the page server 10 may access application computer software running on a netpage application server 13.
Alternatively, and as shown explicitly in Figure 1, a two-step information retrieval process may be employed. Interaction data is sent from the sensing device 101 to the relay device 601 in the usual way. The relay device 601 then sends the interaction data to the page server 10 for interpretation with reference to the relevant page description 5. Then, the page server 10 forms a request (typically in the form of a request URI) and sends this request URI back to the user's relay device 601. A web browser running on the relay device 601 then sends the request URI to a netpage web server 201, which interprets the request. The netpage web server 201 may interact with local web resources and external web services 200 to interpret the request and construct a webpage. Once the webpage has been constructed by the netpage web server 201, it is transmitted to the web browser running on the user's relay device 601, which typically displays the webpage. This system architecture is particularly useful for performing searching via netpages, as described in our earlier US Patent Application No. 11/672,950 filed on February 8, 2007 (the contents of which are incorporated by reference). For example, the request URI may encode search query terms, which are searched via the netpage web server 201.
The netpage relay device 601 can be configured to support any number of sensing devices, and a sensing device can work with any number of netpage relays. In the preferred implementation, each netpage sensing device 101 has a unique identifier. This allows each user to maintain a distinct profile with respect to a netpage page server 10 or application server 13. Digital, on-demand delivery of netpages 1 may be performed by the netpage printer 601b, which exploits the growing availability of broadband Internet access. Netpage publication servers 14 on the netpage network are configured to deliver print- quality publications to netpage printers. Periodical publications are delivered automatically to subscribing netpage printers via pointcasting and multicasting Internet protocols. Personalized publications are filtered and formatted according to individual user profiles.
A netpage pen may be registered with a netpage registration server 11 and linked to one or more payment card accounts. This allows e-commerce payments to be securely authorized using the netpage pen. The netpage registration server compares the signature captured by the netpage pen with a previously registered signature, allowing it to authenticate the user's identity to an e-commerce server. Other biometrics can also be used to verify identity. One version of the netpage pen includes fingerprint scanning, verified in a similar way by the netpage registration server.
NETPAGE SYSTEM ARCHITECTURE
Each object model in the system is described using a Unified Modeling Language
(UML) class diagram. A class diagram consists of a set of object classes connected by relationships, and two kinds of relationships are of interest here: associations and generalizations. An association represents some kind of relationship between objects, i.e. between instances of classes. A generalization relates actual classes, and can be understood in the following way: if a class is thought of as the set of all objects of that class, and class A is a generalization of class B, then B is simply a subset of A. The UML does not directly support second-order modelling - i.e. classes of classes.
Each class is drawn as a rectangle labelled with the name of the class. It contains a list of the attributes of the class, separated from the name by a horizontal line, and a list of the operations of the class, separated from the attribute list by a horizontal line. In the class diagrams which follow, however, operations are never modelled.
An association is drawn as a line joining two classes, optionally labelled at either end with the multiplicity of the association. The default multiplicity is one. An asterisk (*) indicates a multiplicity of "many", i.e. zero or more. Each association is optionally labelled with its name, and is also optionally labelled at either end with the role of the corresponding class. An open diamond indicates an aggregation association ("is-part-of"), and is drawn at the aggregator end of the association line.
A generalization relationship ("is-a") is drawn as a solid line joining two classes, with an arrow (in the form of an open triangle) at the generalization end.
When a class diagram is broken up into multiple diagrams, any class which is duplicated is shown with a dashed outline in all but the main diagram which defines it. It is shown with attributes only where it is defined.
1 NETPAGES
Netpages are the foundation on which a netpage network is built. They provide a paper-based user interface to published information and interactive services.
A netpage consists of a printed page (or other surface region) invisibly tagged with references to an online description of the page. The online page description is maintained persistently by the netpage page server 10. The page description describes the visible layout and content of the page, including text, graphics and images. It also describes the input elements on the page, including buttons, hyperlinks, and input fields. A netpage allows markings made with a netpage pen on its surface to be simultaneously captured and processed by the netpage system.
Multiple netpages (for example, those printed by analog printing presses) can share the same page description. However, to allow input through otherwise identical pages to be distinguished, each netpage may be assigned a unique page identifier. This page ID has sufficient precision to distinguish between a very large number of netpages. Each reference to the page description is encoded in a printed tag. The tag identifies the unique page on which it appears, and thereby indirectly identifies the page description. The tag also identifies its own position on the page. Characteristics of the tags are described in more detail below.
Tags are typically printed in infrared-absorptive ink on any substrate which is infrared-reflective, such as ordinary paper, or in infrared fluorescing ink. Near-infrared wavelengths are invisible to the human eye but are easily sensed by a solid-state image sensor with an appropriate filter.
A tag is sensed by a 2D area image sensor in the netpage sensing device, and the tag data is transmitted to the netpage system via the nearest netpage relay device. The pen is wireless and communicates with the netpage relay device via a short-range radio link. Tags are sufficiently small and densely arranged that the sensing device can reliably image at least one tag even on a single click on the page. It is important that the pen recognize the page ID and position on every interaction with the page, since the interaction is stateless. Tags are encoded using an error-correcting code to make them partially tolerant of surface damage.
The netpage page server 10 maintains a unique page instance for each unique printed netpage, allowing it to maintain a distinct set of user-supplied values for input fields in the page description for each printed netpage. The relationship between the page description, the page instance, and the printed netpage is shown in Figure 4. The printed netpage may be part of a printed netpage document 45. The page instance may be associated with both the netpage printer which printed it and, if known, the netpage user who requested it.
2 NETPAGE TAGS
2.1 Tag Data Content
In a preferred form, each tag identifies the region in which it appears, and the location of that tag within the region and an orientation of the tag relative to a substrate on which the tag is printed. A tag may also contain flags which relate to the region as a whole or to the tag. One or more flag bits may, for example, signal a tag sensing device to provide feedback indicative of a function associated with the immediate area of the tag, without the sensing device having to refer to a description of the region. A netpage pen may, for example, illuminate an "active area" LED when in the zone of a hyperlink.
As will be more clearly explained below, in a preferred embodiment, each tag typically contains an easily recognized invariant structure which aids initial detection, and which assists in minimizing the effect of any warp induced by the surface or by the sensing process. The tags preferably tile the entire page, and are sufficiently small and densely arranged that the pen can reliably image at least one tag even on a single click on the page. It is important that the pen recognize the page ID and position on every interaction with the page, since the interaction is stateless.
In a preferred embodiment, the region to which a tag refers coincides with an entire page, and the region ID encoded in the tag is therefore synonymous with the page ID of the page on which the tag appears. In other embodiments, the region to which a tag refers can be an arbitrary subregion of a page or other surface. For example, it can coincide with the zone of an interactive element, in which case the region ID can directly identify the interactive element.
Table 1 - Tag data
Field        Precision (bits)
Region ID    100
Tag ID       16
Flags        4
Total        120
Each tag contains 120 bits of information, typically allocated as shown in Table 1. Assuming a maximum tag density of 64 per square inch, a 16-bit tag ID supports a region size of up to 1024 square inches. Larger regions can be mapped continuously without increasing the tag ID precision simply by using abutting regions and maps. The 100-bit region ID allows 2^100 (~10^30, or a million trillion trillion) different regions to be uniquely identified.
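The capacity figures quoted above can be checked with a few lines of arithmetic (a worked check only):

```python
TAGS_PER_SQ_INCH = 64          # maximum tag density assumed above
TAG_ID_BITS = 16
REGION_ID_BITS = 100

max_tags = 2 ** TAG_ID_BITS                    # 65,536 distinct tag IDs per region
max_region_area = max_tags / TAGS_PER_SQ_INCH  # 1024.0 square inches
max_regions = 2 ** REGION_ID_BITS              # ~1.27e30 uniquely identifiable regions

print(max_tags, max_region_area, f"{max_regions:.2e}")
```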
2.2 Tag Data Encoding
The 120 bits of tag data are redundantly encoded using a (15, 5) Reed-Solomon code. This yields 360 encoded bits consisting of 6 codewords of 15 4-bit symbols each. The (15, 5) code allows up to 5 symbol errors to be corrected per codeword, i.e. it is tolerant of a symbol error rate of up to 33% per codeword.
Each 4-bit symbol is represented in a spatially coherent way in the tag, and the symbols of the six codewords are interleaved spatially within the tag. This ensures that a burst error (an error affecting multiple spatially adjacent bits) damages a minimum number of symbols overall and a minimum number of symbols in any one codeword, thus maximising the likelihood that the burst error can be fully corrected.
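The effect of spatial interleaving on burst errors can be illustrated as follows; the ordering shown is one possible interleaving chosen for the example, not the specific layout used within a netpage tag.

```python
CODEWORDS = 6        # (15, 5) Reed-Solomon codewords per tag
SYMBOLS = 15         # 4-bit symbols per codeword after encoding

# One illustrative interleaving: emit symbol 0 of every codeword, then symbol 1 of
# every codeword, and so on, so that adjacent positions belong to different codewords.
interleaved = [(cw, s) for s in range(SYMBOLS) for cw in range(CODEWORDS)]

# In this ordering, a burst error wiping out any 6 consecutive symbol positions
# damages at most one symbol per codeword, well within the 5-symbol correction limit.
burst = interleaved[30:36]
assert len({cw for cw, _ in burst}) == CODEWORDS
```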
Any suitable error-correcting code can be used in place of a (15, 5) Reed-Solomon code, for example a Reed-Solomon code with more or less redundancy, with the same or different symbol and codeword sizes; another block code; or a different kind of code, such as a convolutional code (see, for example, Stephen B. Wicker, Error Control Systems for Digital Communication and Storage, Prentice-Hall 1995, the contents of which are herein incorporated by cross-reference).
2.3 Physical Tag Structure
The physical representation of the tag, shown in Figure 5a, includes fixed target structures 15, 16, 17 and variable data areas 18. The fixed target structures allow a sensing device such as the netpage pen to detect the tag and infer its three-dimensional orientation relative to the sensor. The data areas contain representations of the individual bits of the encoded tag data.
To achieve proper tag reproduction, the tag is rendered at a resolution of 256x256 dots. When printed at 1600 dots per inch this yields a tag with a diameter of about 4 mm.
At this resolution the tag is designed to be surrounded by a "quiet area" of radius 16 dots. Since the quiet area is also contributed by adjacent tags, it only adds 16 dots to the effective diameter of the tag.
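As a worked check of these dimensions (illustration only):

```python
DOTS = 256            # tag rendered at 256 x 256 dots
QUIET = 16            # quiet-area radius in dots, shared with neighbouring tags
DPI = 1600            # print resolution in dots per inch
MM_PER_INCH = 25.4

tag_diameter_mm = DOTS / DPI * MM_PER_INCH                   # ~4.06 mm
effective_diameter_mm = (DOTS + QUIET) / DPI * MM_PER_INCH   # quiet area adds 16 dots

print(round(tag_diameter_mm, 2), round(effective_diameter_mm, 2))
```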
The tag may include a plurality of target structures. A detection ring 15 allows the sensing device to initially detect the tag. The ring is easy to detect because it is rotationally invariant and because a simple correction of its aspect ratio removes most of the effects of perspective distortion. An orientation axis 16 allows the sensing device to determine the approximate planar orientation of the tag due to the yaw of the sensor. The orientation axis is skewed to yield a unique orientation. Four perspective targets 17 allow the sensing device to infer an accurate two-dimensional perspective transform of the tag and hence an accurate three-dimensional position and orientation of the tag relative to the sensor.
All target structures are redundantly large to improve their immunity to noise.
In order to support "single-click" interaction with a tagged region via a sensing device, the sensing device must be able to see at least one entire tag in its field of view no matter where in the region or at what orientation it is positioned. The required diameter of the field of view of the sensing device is therefore a function of the size and spacing of the tags.
Thus, if a tag has a circular shape, the minimum diameter of the sensor field of view is obtained when the tags are tiled on an equilateral triangular grid, as shown in Figure 5b.
2.4 Tag Image Processing and Decoding
The tag image processing and decoding performed by a sensing device such as the netpage pen is shown in Figure 7. While a captured image is being acquired from the image sensor, the dynamic range of the image is determined (at 20). The center of the range is then chosen as the binary threshold for the image 21. The image is then thresholded and segmented into connected pixel regions (i.e. shapes 23) (at 22). Shapes which are too small to represent tag target structures are discarded. The size and centroid of each shape is also computed. Binary shape moments 25 are then computed (at 24) for each shape, and these provide the basis for subsequently locating target structures. Central shape moments are by their nature invariant of position, and can be easily made invariant of scale, aspect ratio and rotation.
The ring target structure 15 is the first to be located (at 26). A ring has the advantage of being very well behaved when perspective-distorted. Matching proceeds by aspect-normalizing and rotation-normalizing each shape's moments. Once its second-order moments are normalized the ring is easy to recognize even if the perspective distortion was significant. The ring's original aspect and rotation 27 together provide a useful approximation of the perspective transform. The axis target structure 16 is the next to be located (at 28). Matching proceeds by applying the ring's normalizations to each shape's moments, and rotation-normalizing the resulting moments. Once its second-order moments are normalized the axis target is easily recognized. Note that one third order moment is required to disambiguate the two possible orientations of the axis. The shape is deliberately skewed to one side to make this possible. Note also that it is only possible to rotation-normalize the axis target after it has had the ring's normalizations applied, since the perspective distortion can hide the axis target's axis. The axis target's original rotation provides a useful approximation of the tag's rotation due to pen yaw 29.
The four perspective target structures 17 are the last to be located (at 30). Good estimates of their positions are computed based on their known spatial relationships to the ring and axis targets, the aspect and rotation of the ring, and the rotation of the axis. Matching proceeds by applying the ring's normalizations to each shape's moments. Once their second-order moments are normalized the circular perspective targets are easy to recognize, and the target closest to each estimated position is taken as a match. The original centroids of the four perspective targets are then taken to be the perspective-distorted corners 31 of a square of known size in tag space, and an eight-degree-of-freedom perspective transform 33 is inferred (at 32) based on solving the well-understood equations relating the four tag-space and image-space point pairs (see Heckbert, P., Fundamentals of Texture Mapping and Image Warping, Masters Thesis, Dept. of EECS, U. of California at Berkeley, Technical Report No. UCB/CSD 89/516, June 1989, the contents of which are herein incorporated by cross-reference).
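For reference, an eight-degree-of-freedom perspective transform can be recovered from four tag-space/image-space point pairs by solving a small linear system, for example as in the following sketch (a generic formulation, not the pen controller's implementation):

```python
import numpy as np


def perspective_transform(tag_pts, image_pts):
    """Solve for the 3x3 homography H (with the bottom-right entry fixed to 1)
    mapping the four known tag-space corners to the four observed image-space
    centroids."""
    A, b = [], []
    for (x, y), (u, v) in zip(tag_pts, image_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)


def project(H, x, y):
    """Project a tag-space point into image space using the inferred transform."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w
```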
The inferred tag-space to image-space perspective transform is used to project (at 36) each known data bit position in tag space into image space where the real-valued position is used to bilinearly interpolate (at 36) the four relevant adjacent pixels in the input image. The previously computed image threshold 21 is used to threshold the result to produce the final bit value 37.
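The projection-and-sampling step can be illustrated as follows, continuing the sketch above; it assumes the projected position lies strictly inside the image, and the dark-dot-equals-one polarity is an assumption of the example.

```python
import numpy as np


def sample_bit(image, threshold, x, y):
    """Bilinearly interpolate the image at the projected real-valued (x, y) position
    of a data bit and threshold the result to a bit value (sketch only)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    patch = image[y0:y0 + 2, x0:x0 + 2].astype(float)
    value = (patch[0, 0] * (1 - fx) * (1 - fy) + patch[0, 1] * fx * (1 - fy)
             + patch[1, 0] * (1 - fx) * fy + patch[1, 1] * fx * fy)
    return 1 if value < threshold else 0   # dark (ink-absorptive) dot taken as a set bit
```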
Once all 360 data bits 37 have been obtained in this way, each of the six 60-bit Reed-Solomon codewords is decoded (at 38) to yield 20 decoded bits 39, or 120 decoded bits in total. Note that the codeword symbols are sampled in codeword order, so that codewords are implicitly de-interleaved during the sampling process.
The ring target 15 is only sought in a subarea of the image whose relationship to the image guarantees that the ring, if found, is part of a complete tag. If a complete tag is not found and successfully decoded, then no pen position is recorded for the current frame.
Given adequate processing power and ideally a non-minimal field of view 193, an alternative strategy involves seeking another tag in the current image.
The obtained tag data indicates the identity of the region containing the tag and the position of the tag within the region. An accurate position 35 of the pen nib in the region, as well as the overall orientation 35 of the pen, is then inferred (at 34) from the perspective transform 33 observed on the tag and the known spatial relationship between the image sensor (containing the optical axis of the pen) and the nib (which typically contains the physical axis of the pen). The image sensor is usually offset from the nib.
2.5 Alternative Tag Structures
The tag structure described above is designed to support the tagging of non-planar surfaces where a regular tiling of tags may not be possible. In the more usual case of planar surfaces where a regular tiling of tags is possible, i.e. surfaces such as sheets of paper and the like, more efficient tag structures can be used which exploit the regular nature of the tiling.
Figure 6a shows a square tag 4 with four perspective targets 17. The tag represents sixty 4-bit Reed-Solomon symbols 47, for a total of 240 bits. The tag represents each one bit as a dot 48, and each zero bit by the absence of the corresponding dot. The perspective targets are designed to be shared between adjacent tags, as shown in Figures 6b and 6c. Figure 6b shows a square tiling of 16 tags and the corresponding minimum field of view 193, which must span the diagonals of two tags. Figure 6c shows a square tiling of nine tags, containing all one bits for illustration purposes. Using a (15, 7) Reed-Solomon code, 112 bits of tag data are redundantly encoded to produce 240 encoded bits. The four codewords are interleaved spatially within the tag to maximize resilience to burst errors. Assuming a 16-bit tag ID as before, this allows a region ID of up to 92 bits.
The data-bearing dots 48 of the tag are designed to not overlap their neighbors, so that groups of tags cannot produce structures which resemble targets. This also saves ink. The perspective targets therefore allow detection of the tag, so further targets are not required. Tag image processing proceeds as described in section 2.4 above, with the exception that steps 26 and 28 are omitted.
Although the tag may contain an orientation feature to allow disambiguation of the four possible orientations of the tag relative to the sensor, it is also possible to embed orientation data in the tag data. For example, the four codewords can be arranged so that each tag orientation contains one codeword placed at that orientation, as shown in Figure 6d, where each symbol is labelled with the number of its codeword (1-4) and the position of the symbol within the codeword (A-O). Tag decoding then consists of decoding one codeword at each orientation. Each codeword can either contain a single bit indicating whether it is the first codeword, or two bits indicating which codeword it is. The latter approach has the advantage that if, say, the data content of only one codeword is required, then at most two codewords need to be decoded to obtain the desired data. This may be the case if the region ID is not expected to change within a stroke and is thus only decoded at the start of a stroke. Within a stroke only the codeword containing the tag ID is then desired. Furthermore, since the rotation of the sensing device changes slowly and predictably within a stroke, only one codeword typically needs to be decoded per frame.
It is possible to dispense with perspective targets altogether and instead rely on the data representation being self-registering. In this case each bit value (or multi-bit value) is typically represented by an explicit glyph, i.e. no bit value is represented by the absence of a glyph. This ensures that the data grid is well-populated, and thus allows the grid to be reliably identified and its perspective distortion detected and subsequently corrected during data sampling. To allow tag boundaries to be detected, each tag's data must contain a marker pattern, and these must be redundantly encoded to allow reliable detection. The overhead of such marker patterns is similar to the overhead of explicit perspective targets. One such scheme uses dots positioned at various points relative to grid vertices to represent different glyphs and hence different multi-bit values (see Anoto Technology Description, Anoto April 2000).
Additional tag structures are disclosed in US Patent 6929186 ("Orientation- indicating machine-readable coded data") filed by the applicant or assignee of the present invention and the contents of which is herein incorporated by reference.
2.6 Tag Map
Decoding a tag typically results in a region ID, a tag ID, and a tag-relative pen transform. Before the tag ID and the tag-relative pen location can be translated into an absolute location within the tagged region, the location of the tag within the region must be known. This is given by a tag map, a function which maps each tag ID in a tagged region to a corresponding location. The tag map class diagram is shown in Figure 22, as part of the netpage printer class diagram.
A tag map reflects the scheme used to tile the surface region with tags, and this can vary according to surface type. When multiple tagged regions share the same tiling scheme and the same tag numbering scheme, they can also share the same tag map. The tag map for a region must be retrievable via the region ID. Thus, given a region ID, a tag ID and a pen transform, the tag map can be retrieved, the tag ID can be translated into an absolute tag location within the region, and the tag-relative pen location can be added to the tag location to yield an absolute pen location within the region.
The tag ID may have a structure which assists translation through the tag map. It may, for example, encode cartesian (x-y) coordinates or polar coordinates, depending on the surface type on which it appears. The tag ID structure is dictated by and known to the tag map, and tag IDs associated with different tag maps may therefore have different structures.
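A simple illustration of a tag map for a regularly tiled region follows; the grid layout, tag pitch and tag ID structure are assumptions of the sketch, since the actual structure is dictated by the tag map associated with the region.

```python
class CartesianTagMap:
    """Illustrative tag map for a regularly tiled region: tag IDs are assumed to
    encode (x, y) grid coordinates row by row, with a known tag pitch in mm."""

    def __init__(self, tags_per_row, tag_pitch_mm):
        self.tags_per_row = tags_per_row
        self.tag_pitch_mm = tag_pitch_mm

    def tag_location(self, tag_id):
        # Translate a tag ID into an absolute tag location within the region.
        row, col = divmod(tag_id, self.tags_per_row)
        return col * self.tag_pitch_mm, row * self.tag_pitch_mm

    def absolute_pen_location(self, tag_id, tag_relative_xy):
        # Add the tag-relative pen location to the tag location.
        tx, ty = self.tag_location(tag_id)
        dx, dy = tag_relative_xy
        return tx + dx, ty + dy


# Example: a tag map would be retrieved via the region ID, then used as follows.
tag_map = CartesianTagMap(tags_per_row=128, tag_pitch_mm=4.0)
print(tag_map.absolute_pen_location(tag_id=515, tag_relative_xy=(1.2, -0.4)))
```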
With the tagging scheme described above, the tags usually function in cooperation with associated visual elements on the netpage. These function as user interactive elements in that a user can interact with the printed page using an appropriate sensing device in order for tag data to be read by the sensing device and for an appropriate response to be generated in the netpage system. Additionally (or alternatively), decoding a tag may be used to provide orientation data indicative of the yaw of the pen relative to the surface. The orientation data may be determined using, for example, the orientation axis 16 described above (Section 2.3) or orientation data embedded in the tag data (Section 2.5).
3 DOCUMENT AND PAGE DESCRIPTIONS
A preferred embodiment of a document and page description class diagram is shown in Figures 12 and 13.
In the netpage system a document is described at three levels. At the most abstract level the document 836 has a hierarchical structure whose terminal elements 839 are associated with content objects 840 such as text objects, text style objects, image objects, etc. Once the document is printed on a printer with a particular page size, the document is paginated and otherwise formatted. Formatted terminal elements 835 will in some cases be associated with content objects which are different from those associated with their corresponding terminal elements, particularly where the content objects are style-related. Each printed instance of a document and page is also described separately, to allow input captured through a particular page instance 830 to be recorded separately from input captured through other instances of the same page description.
The presence of the most abstract document description on the page server allows a copy of a document to be printed without being forced to accept the source document's specific format. The user or a printing press may be requesting a copy for a printer with a different page size, for example. Conversely, the presence of the formatted document description on the page server allows the page server to efficiently interpret user actions on a particular printed page. A formatted document 834 consists of a set of formatted page descriptions 5, each of which consists of a set of formatted terminal elements 835. Each formatted element has a spatial extent or zone 58 on the page. This defines the active area of input elements such as hyperlinks and input fields.
A document instance 831 corresponds to a formatted document 834. It consists of a set of page instances 830, each of which corresponds to a page description 5 of the formatted document. Each page instance 830 describes a single unique printed netpage 1, and records the page ID 50 of the netpage. A page instance is not part of a document instance if it represents a copy of a page requested in isolation. A page instance consists of a set of terminal element instances 832. An element instance only exists if it records instance-specific information. Thus, a hyperlink instance exists for a hyperlink element because it records a transaction ID 55 which is specific to the page instance, and a field instance exists for a field element because it records input specific to the page instance. An element instance does not exist, however, for static elements such as textflows.
A terminal element 839 can be a visual element or an input element. A visual element can be a static element 843 or a dynamic element 846. An input element may be, for example, a hyperlink element 844 or a field element 845, as shown in Figure 14. Other types of input element are of course possible, such as input elements which select a particular mode of the pen 101.
A page instance has a background field 833 which is used to record any digital ink captured on the page which does not apply to a specific input element.
In the preferred form of the invention, a tag map 811 is associated with each page instance to allow tags on the page to be translated into locations on the page.
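The essential relationships of this class structure might be captured along the following lines; the field names are illustrative and are not definitions of the reference numerals used above.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Zone = Tuple[float, float, float, float]   # left, top, right, bottom on the page


@dataclass
class FormattedElement:
    element_id: str
    element_type: str          # e.g. "hyperlink", "field", "static"
    zone: Zone                 # spatial extent: the active area for input elements


@dataclass
class PageDescription:
    elements: List[FormattedElement] = field(default_factory=list)


@dataclass
class PageInstance:
    page_id: int               # unique per printed netpage
    description: PageDescription
    field_values: dict = field(default_factory=dict)       # user input, per instance
    background_digital_ink: list = field(default_factory=list)
```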
4 THE NETPAGE NETWORK
In one embodiment, a netpage network consists of a distributed set of netpage page servers 10, netpage registration servers 11, netpage ID servers 12, netpage application servers 13, and netpage relay devices 601 connected via a network 19 such as the Internet, as shown in Figure 3.
The netpage registration server 11 is a server which records relationships between users, pens, printers and applications, and thereby authorizes various network activities. It authenticates users and acts as a signing proxy on behalf of authenticated users in application transactions. It also provides handwriting recognition services. As described above, a netpage page server 10 maintains persistent information about page descriptions and page instances. The netpage network includes any number of page servers, each handling a subset of page instances. Since a page server also maintains user input values for each page instance, clients such as netpage relays 601 send netpage input directly to the appropriate page server. The page server interprets any such input relative to the description of the corresponding page.
A netpage ID server 12 allocates document IDs 51 on demand, and provides load-balancing of page servers via its ID allocation scheme. A netpage relay 601 uses the Internet Domain Name System (DNS), or similar, to resolve a netpage page ID 50 into the network address of the netpage page server 10 handling the corresponding page instance.
A netpage application server 13 is a server which hosts interactive netpage applications.
Netpage servers can be hosted on a variety of network server platforms from manufacturers such as IBM, Hewlett-Packard, and Sun. Multiple netpage servers can run concurrently on a single host, and a single server can be distributed over a number of hosts. Some or all of the functionality provided by netpage servers, and in particular the functionality provided by the ID server and the page server, can also be provided directly in a netpage appliance such as a netpage printer, in a computer workstation, or on a local network.
5 THE NETPAGE PEN
The active sensing device of the netpage system may take the form of a clicker
(for clicking on a specific position on a surface), a pointer having a stylus (for pointing or gesturing on a surface using pointer strokes), or a pen having a marking nib (for marking a surface with ink when pointing, gesturing or writing on the surface). A pen 101 is described herein, although it will be appreciated that clickers and pointers may have similar features. The pen 101 uses its embedded controller 134 to capture and decode netpage tags from a page via an image sensor. The image sensor is a solid-state device provided with an appropriate filter to permit sensing at only near-infrared wavelengths. As described in more detail below, the system is able to sense when the nib is in contact with the surface, and the pen is able to sense tags at a sufficient rate to capture human handwriting (i.e. at 200 dpi or greater and 100 Hz or faster). Information captured by the pen may be encrypted and wirelessly transmitted to the printer (or base station), the printer or base station interpreting the data with respect to the (known) page structure.
The preferred embodiment of the netpage pen 101 operates both as a normal marking ink pen and as a non-marking stylus (i.e. as a pointer). The marking aspect, however, is not necessary for using the netpage system as a browsing system, such as when it is used as an Internet interface. Each netpage pen is registered with the netpage system and has a unique pen ID 61. Figure 11 shows the netpage pen class diagram, reflecting pen-related information maintained by a registration server 11 on the netpage network. When the nib is in contact with a netpage, the pen determines its position and orientation relative to the page. The nib is attached to a force sensor, and the force on the nib is interpreted relative to a threshold to indicate whether the pen is "up" or "down". This allows an interactive element on the page to be 'clicked' by pressing with the pen nib, in order to request, say, information from a network. Furthermore, the force may be captured as a continuous value to allow, say, the full dynamics of a signature to be verified.
The pen determines the position and orientation of its nib on the netpage by imaging, in the infrared spectrum, an area 193 of the page in the vicinity of the nib. It decodes the nearest tag and computes the position of the nib relative to the tag from the observed perspective distortion on the imaged tag and the known geometry of the pen optics. Although the position resolution of the tag may be low, because the tag density on the page is inversely proportional to the tag size, the adjusted position resolution is quite high, exceeding the minimum resolution required for accurate handwriting recognition. Pen actions relative to a netpage are captured as a series of strokes. A stroke consists of a sequence of time-stamped pen positions on the page, initiated by a pen-down event and completed by the subsequent pen-up event. A stroke is also tagged with the page ID 50 of the netpage whenever the page ID changes, which, under normal circumstances, is at the commencement of the stroke. Each netpage pen has a current selection 826 associated with it, allowing the user to perform copy and paste operations etc. The selection is timestamped to allow the system to discard it after a defined time period. The current selection describes a region of a page instance. It consists of the most recent digital ink stroke captured through the pen relative to the background area of the page. It is interpreted in an application-specific manner once it is submitted to an application via a selection hyperlink activation.
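A sketch of how strokes might be represented and captured follows; the pen.read() generator, sample format and force threshold are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Stroke:
    page_id: int                                   # tagged whenever the page ID changes
    samples: List[Tuple[float, float, float, float]] = field(default_factory=list)
    # each sample: (timestamp, x, y, nib_force); initiated by pen-down, closed by pen-up


def capture_stroke(pen, force_threshold=0.1):
    """Accumulate time-stamped positions between pen-down and pen-up events.
    `pen.read()` is a hypothetical generator of (timestamp, x, y, force, page_id)."""
    stroke = None
    for t, x, y, force, page_id in pen.read():
        if force > force_threshold:                # pen-down: start or extend a stroke
            if stroke is None:
                stroke = Stroke(page_id=page_id)
            stroke.samples.append((t, x, y, force))
        elif stroke is not None:                   # pen-up: stroke complete
            return stroke
    return stroke
```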
Each pen has a current nib 824. This is the nib last notified by the pen to the system. In the case of the default netpage pen described above, either the marking black ink nib or the non-marking stylus nib is current. Each pen also has a current nib style 825. This is the nib style last associated with the pen by an application, e.g. in response to the user selecting a color from a palette. The default nib style is the nib style associated with the current nib. Strokes captured through a pen are tagged with the current nib style. When the strokes are subsequently reproduced, they are reproduced in the nib style with which they are tagged. The pen 101 may have one or more buttons 209. As described in US Application
No. 11/672,950 filed on February 8, 2007 (the contents of which is herein incorporated by reference), the button(s) may be used to determine a mode or behavior of the pen, which, in turn, determines how a stroke or, more generally, interaction data is interpreted by the page server 10.
Whenever the pen is within range of a relay device 601 with which it can communicate, the pen slowly flashes its "online" LED. When the pen fails to decode a stroke relative to the page, it momentarily activates its "error" LED. When the pen succeeds in decoding a stroke relative to the page, it momentarily activates its "ok" LED. A sequence of captured strokes is referred to as digital ink. Digital ink forms the basis for the digital exchange of drawings and handwriting, for online recognition of handwriting, and for online verification of signatures.
The pen is typically wireless and transmits digital ink to the relay device 601 via a short-range radio link. The transmitted digital ink is encrypted for privacy and security and packetized for efficient transmission, but is always flushed on a pen-up event to ensure timely handling in the printer.
When the pen is out-of-range of a relay device 601 it buffers digital ink in internal memory, which has a capacity of over ten minutes of continuous handwriting. When the pen is once again within range of a relay device, it transfers any buffered digital ink.
A pen can be registered with any number of relay devices, but because all state data resides in netpages both on paper and on the network, it is largely immaterial which relay device a pen is communicating with at any particular time.
One embodiment of the pen is described in greater detail in Section 7 below, with reference to Figures 8 to 10.
6 NETPAGE INTERACTION
The netpage relay device 601 receives data relating to a stroke from the pen 101 when the pen is used to interact with a netpage 1. The coded data 3 of the tags 4 is read by the pen when it is used to execute a movement, such as a stroke. The data allows the identity of the particular page to be determined and an indication of the positioning of the pen relative to the page to be obtained. Interaction data, typically comprising the page ID 50 and at least one position of the pen, is transmitted to the relay device 601, which resolves, via the DNS, the page ID 50 of the stroke into the network address of the netpage page server 10 which maintains the corresponding page instance 830. It then transmits the stroke to the page server. If the page was recently identified in an earlier stroke, then the relay device may already have the address of the relevant page server in its cache. Each netpage consists of a compact page layout maintained persistently by a netpage page server (see below). The page layout refers to objects such as images, fonts and pieces of text, typically stored elsewhere on the netpage network.
When the page server receives the stroke from the pen, it retrieves the page description to which the stroke applies, and determines which element of the page description the stroke intersects. It is then able to interpret the stroke in the context of the type of the relevant element.
A "click" is a stroke where the distance and time between the pen down position and the subsequent pen up position are both less than some small maximum. An object which is activated by a click typically requires a click to be activated, and accordingly, a longer stroke is ignored. The failure of a pen action, such as a "sloppy" click, to register may be indicated by the lack of response from the pen's "ok" LED.
Hyperlinks and form fields are two kinds of input elements, which may be contained in a netpage page description. Input through a form field can also trigger the activation of an associated hyperlink. These types of input elements are described in further detail in the above-identified patents and patent applications, the contents of which are herein incorporated by cross-reference.
7 DETAILED NETPAGE PEN DESCRIPTION
7.1 PEN MECHANICS
Referring to Figures 8 and 9, the pen, generally designated by reference numeral
101, includes a housing 102 in the form of a plastics moulding having walls 103 defining an interior space 104 for mounting the pen components. Mode selector buttons 209 are provided on the housing 102. The pen top 105 is in operation rotatably mounted at one end 106 of the housing 102. A semi-transparent cover 107 is secured to the opposite end 108 of the housing 102. The cover 107 is also of moulded plastics, and is formed from semi- transparent material in order to enable the user to view the status of the LED mounted within the housing 102. The cover 107 includes a main part 109 which substantially surrounds the end 108 of the housing 102 and a projecting portion 110 which projects back from the main part 109 and fits within a corresponding slot 111 formed in the walls 103 of the housing 102. A radio antenna 112 is mounted behind the projecting portion 110, within the housing 102. Screw threads 113 surrounding an aperture 113A on the cover 107 are arranged to receive a metal end piece 114, including corresponding screw threads 115. The metal end piece 114 is removable to enable ink cartridge replacement.
Also mounted within the cover 107 is a tri-color status LED 116 on a flex PCB 117. The antenna 112 is also mounted on the flex PCB 117. The status LED 116 is mounted at the top of the pen 101 for good all-around visibility.
The pen can operate both as a normal marking ink pen and as a non-marking stylus. An ink pen cartridge 118 with nib 119 and a stylus 120 with stylus nib 121 are mounted side by side within the housing 102. Either the ink cartridge nib 119 or the stylus nib 121 can be brought forward through open end 122 of the metal end piece 114, by rotation of the pen top 105. Respective slider blocks 123 and 124 are mounted to the ink cartridge 118 and stylus 120, respectively. A rotatable cam barrel 125 is secured to the pen top 105 in operation and arranged to rotate therewith. The cam barrel 125 includes a cam 126 in the form of a slot within the walls 181 of the cam barrel. Cam followers 127 and 128 projecting from slider blocks 123 and 124 fit within the cam slot 126. On rotation of the cam barrel 125, the slider blocks 123 or 124 move relative to each other to project either the pen nib 119 or stylus nib 121 out through the hole 122 in the metal end piece 114. The pen 101 has three states of operation. By turning the top 105 through 90° steps, the three states are:
• Stylus 120 nib 121 out;
• Ink cartridge 118 nib 119 out; and
• Neither ink cartridge 118 nib 119 out nor stylus 120 nib 121 out.
A second flex PCB 129 is mounted on an electronics chassis 130 which sits within the housing 102. The second flex PCB 129 mounts an infrared LED 131 for providing infrared radiation for projection onto the surface. An image sensor 132 is mounted on the second flex PCB 129 for receiving reflected radiation from the surface. The second flex PCB 129 also mounts a radio frequency chip 133, which includes an RF transmitter and RF receiver, and a controller chip 134 for controlling operation of the pen 101. An optics block 135 (formed from moulded clear plastics) sits within the cover 107 and projects an infrared beam onto the surface and receives images onto the image sensor 132. Power supply wires 136 connect the components on the second flex PCB 129 to battery contacts 137 which are mounted within the cam barrel 125. A terminal 138 connects to the battery contacts 137 and the cam barrel 125. A three volt rechargeable battery 139 sits within the cam barrel 125 in contact with the battery contacts. An induction charging coil 140 is mounted about the second flex PCB 129 to enable recharging of the battery 139 via induction. The second flex PCB 129 also mounts an infrared LED 143 and infrared photodiode 144 for detecting displacement in the cam barrel 125 when either the stylus 120 or the ink cartridge 118 is used for writing, in order to enable a determination of the force being applied to the surface by the pen nib 119 or stylus nib 121. The IR photodiode 144 detects light from the IR LED 143 via reflectors (not shown) mounted on the slider blocks 123 and 124.
Rubber grip pads 141 and 142 are provided towards the end 108 of the housing 102 to assist gripping the pen 101, and top 105 also includes a clip 142 for clipping the pen 101 to a pocket.
7.2 PEN CONTROLLER
The pen 101 is arranged to determine the position of its nib (stylus nib 121 or ink cartridge nib 119) by imaging, in the infrared spectrum, an area of the surface in the vicinity of the nib. It records the location data from the nearest location tag, and is arranged to calculate the distance of the nib 121 or 119 from the location tag utilising optics 135 and controller chip 134. The controller chip 134 calculates the orientation (yaw) of the pen using an orientation indicator in the imaged tag, and the nib-to-tag distance from the perspective distortion observed on the imaged tag.
Utilising the RF chip 133 and antenna 112, the pen 101 can transmit the digital ink data (which is encrypted for security and packaged for efficient transmission) to the computing system.
When the pen is in range of a relay device 601, the digital ink data is transmitted as it is formed. When the pen 101 moves out of range, digital ink data is buffered within the pen 101 (the pen 101 circuitry includes a buffer arranged to store digital ink data for approximately 12 minutes of pen motion on the surface) and can be transmitted later. In Applicant's US Patent No. 6,870,966, the contents of which are incorporated herein by reference, a pen 101 having an interchangeable ink cartridge nib and stylus nib was described. Accordingly, and referring to Figure 27, when the pen 101 connects to the computing system, the controller 134 notifies the system of the pen ID, nib ID 175, current absolute time 176, and the last absolute time it obtained from the system prior to going offline. The pen ID allows the computing system to identify the pen when there is more than one pen being operated with the computing system.
The nib ID allows the computing system to identify which nib (stylus nib 121 or ink cartridge nib 119) is presently being used. The computing system can vary its operation depending upon which nib is being used. For example, if the ink cartridge nib 119 is being used the computing system may defer producing feedback output because immediate feedback is provided by the ink markings made on the surface. Where the stylus nib 121 is being used, the computing system may produce immediate feedback output. Since a user may change the nib 119, 121 between one stroke and the next, the pen 101 optionally records a nib ID for a stroke 175. This becomes the nib ID implicitly associated with later strokes.
Cartridges having particular nib characteristics may be interchangeable in the pen. The pen controller 134 may interrogate a cartridge to obtain the nib ID 175 of the cartridge. The nib ID 175 may be stored in a ROM or a barcode on the cartridge. The controller 134 notifies the system of the nib ID whenever it changes. The system is thereby able to determine the characteristics of the nib used to produce a stroke 175, and is thereby subsequently able to reproduce the characteristics of the stroke itself.
The controller chip 134 is mounted on the second flex PCB 129 in the pen 101. Figure 10 is a block diagram illustrating in more detail the architecture of the controller chip 134. Figure 10 also shows representations of the RF chip 133, the image sensor 132, the tri-color status LED 116, the IR illumination LED 131, the IR force sensor LED 143, and the force sensor photodiode 144.
The pen controller chip 134 includes a controlling processor 145. Bus 146 enables the exchange of data between components of the controller chip 134. Flash memory 147 and a 512 KB DRAM 148 are also included. An analog-to-digital converter 149 is arranged to convert the analog signal from the force sensor photodiode 144 to a digital signal.
An image sensor interface 152 interfaces with the image sensor 132. A transceiver controller 153 and base band circuit 154 are also included to interface with the RF chip 133 which includes an RF circuit 155 and RF resonators and inductors 156 connected to the antenna 112.
The controlling processor 145 captures and decodes location data from tags from the surface via the image sensor 132, monitors the force sensor photodiode 144, controls the LEDs 116, 131 and 143, and handles short-range radio communication via the radio transceiver 153. It is a medium-performance (~40 MHz) general-purpose RISC processor.
The processor 145, digital transceiver components (transceiver controller 153 and baseband circuit 154), image sensor interface 152, flash memory 147 and 512KB DRAM 148 are integrated in a single controller ASIC. Analog RF components (RF circuit 155 and RF resonators and inductors 156) are provided in the separate RF chip.
The image sensor is a 215x215 pixel CCD (such a sensor is produced by Matsushita Electronic Corporation, and is described in a paper by K. Itakura, T. Nobusada, N. Okusenya, R. Nagayoshi and M. Ozaki, "A 1mm 50k-Pixel IT CCD Image Sensor for Miniature Camera System", IEEE Transactions on Electron Devices, Vol. 47, No. 1, January 2000, which is incorporated herein by reference) with an IR filter.
The controller ASIC 134 enters a quiescent state after a period of inactivity when the pen 101 is not in contact with a surface. It incorporates a dedicated circuit 150 which monitors the force sensor photodiode 144 and wakes up the controller 134 via the power manager 151 on a pen-down event.
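In the pen this wake-up logic is a dedicated hardware circuit, but its behaviour can be sketched in software terms as follows; the force threshold and idle timeout are purely illustrative assumptions.

```python
class PenPowerManager:
    """Software sketch of the quiescent/wake behaviour described above."""

    def __init__(self, force_threshold=0.05, idle_timeout_s=30.0):
        self.force_threshold = force_threshold   # assumed normalised force for a pen-down event
        self.idle_timeout_s = idle_timeout_s     # assumed period of inactivity before sleeping
        self.quiescent = False
        self._idle_time = 0.0

    def on_force_sample(self, force, dt):
        if force >= self.force_threshold:        # pen-down: wake the controller if asleep
            self._idle_time = 0.0
            self.quiescent = False
        else:
            self._idle_time += dt
            if self._idle_time >= self.idle_timeout_s:
                self.quiescent = True            # enter the quiescent state
```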
The radio transceiver communicates in the unlicensed 900MHz band normally used by cordless telephones, or alternatively in the unlicensed 2.4GHz industrial, scientific and medical (ISM) band, and uses frequency hopping and collision detection to provide interference-free communication.
In an alternative embodiment, the pen incorporates an Infrared Data Association (IrDA) interface for short-range communication with a base station or netpage printer.
7.3 ALTERNATIVE MOTION SENSOR
In a further embodiment, the pen 101 includes a pair of orthogonal accelerometers mounted in the plane normal to the pen 101 axis. The accelerometers 190 are shown in Figures 9 and 10 in ghost outline.
The provision of the accelerometers enables this embodiment of the pen 101 to sense motion without reference to surface location tags. Each location tag ID can then identify an object of interest rather than a position on the surface. For example, if the object is a user interface input element (e.g. a command button), then the tag ID of each location tag within the area of the input element can directly identify the input element.
The acceleration measured by the accelerometers in each of the x and y directions is integrated with respect to time to produce an instantaneous velocity and position.
Since the starting position of the stroke may not be known, only relative positions within a stroke are calculated. Although position integration accumulates errors in the sensed acceleration, accelerometers typically have high resolution, and the time duration of a stroke, over which errors accumulate, is short.
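A minimal Python sketch of this double integration, assuming a fixed sampling interval and no bias or drift compensation (both simplifications):

```python
def integrate_stroke(samples, dt):
    """Integrate (ax, ay) acceleration samples twice to get positions within a stroke.

    Positions are relative to the (unknown) stroke start, as described above.
    """
    vx = vy = 0.0
    x = y = 0.0
    positions = [(0.0, 0.0)]
    for ax, ay in samples:
        vx += ax * dt          # acceleration -> velocity
        vy += ay * dt
        x += vx * dt           # velocity -> position
        y += vy * dt
        positions.append((x, y))
    return positions
```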
It will be appreciated that a number of alternative (or additional) motion sensors may be employed in a Netpage pen 101. These typically either measure absolute displacement or relative displacement. For example, an optical mouse that measures displacement relative to an external grid (see US 4,390,873 and US 4,521,772) measures absolute displacement, whereas a mechanical mouse that measures displacement via the movement of a wheel or ball in contact with the surface (see US 3,541,541 and US 4,464,652) measures relative displacement because measurement errors accumulate. An optical mouse that measures displacement relative to surface texture (see US 6,631,218, US 6,281,882, US 6,297,513 and US 4,794,384), measures relative displacement for the same reason. Motion sensors based on point interferometry (see US 6,246,482) or acceleration (see US 4,787,051) also measure relative displacement. The contents of all US Patents identified in the preceding paragraph relating to motion sensors are herein incorporated by reference.
7.4 GUI CONTROL
As discussed in US Application No. 11/672,950 filed on February 8, 2007 (the contents of which is herein incorporated by reference), a Netpage pen 101 can be used to generate cursor control commands (i.e. typically mouse events) to allow seamless transitions between paper interactions and on-screen interactions. A computer system associated with the display device may receive cursor control data (in the form of relative motion data) directly from the pen 101, with the pen performing the necessary processing to generate the cursor control data from the interaction data. Alternatively, the computer system may receive interaction data as usual from the pen 101, and then generate cursor control commands for an associated display device. Alternatively, the computer system may be a remote server which receives interaction data from the Netpage pen 101 and transmits cursor control commands to a display device near the user (e.g. mobile phone). Any of these system architectures may support cursor control, although generation of cursor control commands in the pen is generally preferred.
As discussed in US Application No. 11/672,950, a cursor control behaviour can be selected in various ways, including via a momentary or persistent mode switch. Alternatively, cursor control behaviour may also be automatically selected in the absence of a Netpage tag pattern, if the Netpage pen 101 incorporates a motion sensor that functions in the absence of a tag pattern.
When positions generated by a Netpage pen 101 are intrinsically absolute, such as when generated at least partially with reference to a Netpage tag pattern, then such positions can be trivially converted into absolute cursor control commands. The extent of the physical surface with which the sensing device is interacting is ideally mapped to the extent of a display device for the purposes of translating sensing device positions into cursor control commands.
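A minimal sketch of this proportional mapping, with hypothetical parameter names and units (the source does not prescribe a formula):

```python
def surface_to_screen(x, y, surface_extent, screen_extent):
    """Map an absolute pen position on the tagged surface to a cursor position.

    surface_extent and screen_extent are (width, height) pairs, e.g. millimetres
    and pixels respectively.
    """
    surface_w, surface_h = surface_extent
    screen_w, screen_h = screen_extent
    return x / surface_w * screen_w, y / surface_h * screen_h
```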
However, cursor control commands commonly specify changes in position rather than absolute positions - i.e. they are relative position changes. Absolute positions generated by a Netpage pen 101 by reading tags 4 may be trivially converted into relative cursor control commands. The relative scale of physical displacements of the Netpage reader and corresponding screen displacements can be specified as a matter of user preference, as well as whether the mapping is absolute or relative. Although relative displacements of the pen 101 may be readily calculated from absolute motion data, the implementation of cursor control behaviour from absolute positions is problematic in practice. In order for cursor control behaviour to appear 'natural' from a user's perspective, cursor movements on a screen should always follow substantially the movement of the pen 101 i.e. a left pen movement moves the on-screen cursor to the left, an up pen movement moves the on-screen cursor upwards etc. If a surface is orientated in the same way as a displayed page, then on-screen cursor movements naturally reflect movement of the pen 101. However, if the surface is upside down, then the on-screen cursor movements will be confusing and unnatural for the user i.e. a left pen movement would move the on-screen cursor to the right. Since users are not accustomed to orienting their traditional mousepads in a certain way, they equally would not expect to orient their Netpage-tagged surface in a certain way in order to invoke natural cursor-control behaviour. In other words, it is desirable that the surface and pen 101 should invoke natural mouse behaviour irrespective of how the surface is oriented.
This problem is solved by making use of the orientation information contained in the Netpage tags 4. As explained above in Sections 2.3 to 2.5, the yaw of the pen 101 relative to the surface may be calculated by making use of this orientation information. If orientation data as well as absolute motion data is received by a computer system, then the computer system can determine movement of the pen 101 relative to itself by taking into account the yaw of the pen relative to the surface. Hence, movement of the pen, from a user's perspective, is substantially translated into a corresponding on-screen cursor movement. In this way, the pen 101 may be used to naturally control the movement of the cursor, irrespective of the orientation of the surface.
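A minimal sketch of this yaw compensation, assuming the yaw is available in radians and a particular sign convention (both assumptions): rotating the surface-frame displacement by the negative of the pen's yaw expresses it in the pen's own frame, so the cursor follows the user's hand regardless of how the page is oriented.

```python
import math

def relative_motion(p_prev, p_curr, yaw):
    """Convert successive absolute surface positions into pen-relative motion.

    p_prev and p_curr are (x, y) positions in the surface's coordinate frame;
    yaw is the pen's rotation relative to the surface, in radians.
    """
    dx = p_curr[0] - p_prev[0]
    dy = p_curr[1] - p_prev[1]
    cos_y, sin_y = math.cos(-yaw), math.sin(-yaw)
    return dx * cos_y - dy * sin_y, dx * sin_y + dy * cos_y
```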
Of course, when positions generated by a Netpage pen 101 are intrinsically relative, such as when they are generated using a relative motion sensing mechanism (e.g. accelerometers, mechanical mouse, optical mouse etc), then they map naturally to relative cursor control commands, again with a (potentially user-specified) scale factor.
Cursor control is a subset of graphical user interface (GUI) input and control in general, including functions such as:
• scrolling
• web browser back/forward
• page up/down etc.
• cut/copy/paste
• tab between GUI applications
• launch specific GUI application (word processor, e-mail, web browser, etc.)
• volume control (up/down/mute)
• log off, sleep
• keyboard entry in general
Scrolling can be supported in the conventional way via a scroll wheel on the Netpage pen 101. It can also be provided implicitly as part of a cursor control behaviour, when motion sensing at least partially occurs with reference to a Netpage tag pattern, by reserving part of the extent of each printed Netpage tag pattern as a scroll region. For example, the right-hand couple of inches of each printed Netpage might be reserved for vertical scrolling, and the bottom couple of inches might be reserved for horizontal scrolling. This scroll region may be active when it is determined that the pen 101 is operating in a cursor mode (for example, by a mode switch on the pen).
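A minimal hit test for such reserved scroll regions might look as follows; the two-inch (50.8 mm) margins and the choice of the right-hand and bottom edges follow the example above, and the function name is hypothetical.

```python
def scroll_region(x, y, page_width, page_height, margin=50.8):
    """Return which reserved scroll strip, if any, a pen position falls in.

    Coordinates and the margin are in millimetres.
    """
    if x >= page_width - margin:
        return "vertical"      # right-hand strip scrolls vertically
    if y >= page_height - margin:
        return "horizontal"    # bottom strip scrolls horizontally
    return None
```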
Scrolling, like cursor control, can be absolute or relative, and can be specified as a matter of user preference. As an alternative to separate vertical and horizontal scrolling areas, the dominant direction of the user's scroll gesture within a single scroll region, relative to the orientation of the tag pattern, can also be used to distinguish vertical from horizontal scrolling. In order to allow diagonal scrolling, a threshold can be imposed on the vertical and horizontal components of the user's gesture to prevent inadvertent diagonal scrolling. As an alternative to reserving a scroll region, one or two scroll mode selection switches can be provided on the pen 101.
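The dominant-direction rule, with a threshold to prevent inadvertent diagonal scrolling, can be sketched as follows; the 2:1 ratio is an illustrative assumption.

```python
def interpret_scroll(gesture_dx, gesture_dy, dominance_ratio=2.0):
    """Classify a scroll gesture made within a single scroll region.

    gesture_dx and gesture_dy are the gesture components relative to the
    orientation of the tag pattern.
    """
    ax, ay = abs(gesture_dx), abs(gesture_dy)
    if ax > dominance_ratio * ay:
        return ("horizontal", gesture_dx)
    if ay > dominance_ratio * ax:
        return ("vertical", gesture_dy)
    return ("diagonal", (gesture_dx, gesture_dy))  # both components comparable
```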
Figure 15 shows cursor control and scrolling functions mapped onto an arbitrary Netpage tagged page. The functions are only operative when the Netpage pen's cursor control behaviour is selected. The user readily learns from experience that if the pen's cursor control behaviour is selected then cursor control and scrolling actions may be performed via certain regions of any Netpage, even though there may be no explicit visual indication of this functionality in the visible content of the page. For example, scroll regions along the right-hand and bottom edge regions may be standard for any Netpage. These scroll regions would be active only in the cursor mode.
Figure 16 shows cursor control and scrolling functions explicitly mapped onto a page with a matching visual layout. Hence, the page shown in Figure 16 is a dedicated cursor control page, specifically tailored for GUI control, including cursor control and scrolling actions. Figure 17 shows cursor control, scrolling functions and keyboard keys explicitly mapped onto a page with a matching visual layout. In the explicitly mapped cases the functions are self-selecting, and so can be operative in any pen behaviour. Any function or key input can also be generated via a suitable Netpage tagged surface, either via tags specifically coded to indicate corresponding functions or keys, or via a page description that indicates corresponding functions or keys in the usual way. The former has the advantage that the input can be identified without consulting the page description 5. This allows the user's relay device 601, which may commonly be the user's display device, to capture such input without recourse to the Netpage server 10. In the latter case any function or key input generated by the server with reference to the page description can be routed to the user's display device via the netpage architecture (see Figure 1 and US Patent Publication No. 2006/025175, the contents of which is herein incorporated by reference). This has the advantage that it supports a display device that is separate from the relay device.
Printed controls can also be provided for selecting one-shot or persistent modes, such as a scrolling mode. Printed controls may be selected via a typical netpage interaction, whereby a user selects an interactive control element on a page, and this interaction is interpreted as a mode selection in the computer system via the page description.
The present invention has been described with reference to a preferred embodiment and a number of specific alternative embodiments. However, it will be appreciated by those skilled in the relevant fields that a number of other embodiments, differing from those specifically described, will also fall within the spirit and scope of the present invention. Accordingly, it will be understood that the invention is not intended to be limited to the specific embodiments described in the present specification, including documents incorporated by cross-reference as appropriate. The scope of the invention is only limited by the attached claims.

Claims

1. A system for controlling movement of a cursor on a display device, the system comprising: a substrate having a position-coding pattern disposed on or in a surface thereof; a sensing device comprising: an image sensor for optically imaging the position-coding pattern; and a processor configured for: generating absolute motion data by determining a plurality of absolute positions of the sensing device relative to the surface using the imaged position-coding pattern; generating orientation data indicative of an orientation of the sensing device relative to the substrate; and using the orientation data to translate the absolute motion data into relative motion data, said relative motion data being indicative of relative motion of the sensing device from the perspective of a user; and communication means for communicating the relative motion data to a computer system; and the computer system configured for: receiving said relative motion data from the sensing device; interpreting said relative motion data as cursor movement; and generating cursor control commands for said display device.
2. The system of claim 1, wherein the relative motion data is indicative of relative position changes of the sensing device substantially from a perspective of a user, and irrespective of an orientation of said substrate.
3. The system of claim 1, wherein the position-coding pattern comprises a plurality of tags, each tag identifying a location on the surface and a rotational orientation of the tag relative to the substrate, thereby enabling a yaw of the sensing device relative to the substrate to be determined.
4. The system of claim 1, wherein said display device is selected from at least one of: a display device associated with the computer system; a display device integral with the computer system; and a display device remote from the computer system.
5. The system of claim 1, wherein said sensing device is operable in a plurality of modes, said plurality including a cursor mode and at least one other mode, and wherein the computer system is further configured for: determining that said sensing device is operating in a cursor mode.
6. The system of claim 5, wherein said at least one other mode is selected from the group comprising: a scroll mode; a hyperlinking mode; a searching mode; a content- extraction mode; and a handwriting mode.
7. The system of claim 4, wherein said sensing device comprises a mode selector, and said interaction data comprises mode data indicative of said cursor mode.
8. The system of claim 7, wherein said mode selector comprises at least one of: one or more mode buttons operable by a user; and a sensor for detecting a force exerted by said sensing device on said surface.
9. The system of claim 5, wherein said computer system is configured for retrieving stored mode data indicative of a most recent mode selected for said sensing device.
10. The system of claim 5, wherein said computer system is further configured for: determining if said sensing device is positioned within a cursor zone of said substrate, said cursor zone being activated by determination of said cursor mode; and interpreting relative motion of said sensing device only within said cursor zone as said cursor movement.
11. The system of claim 10, wherein said computer system is further configured for: determining if said sensing device is positioned within a scroll zone of said substrate, said scroll zone being activated by determination of said cursor mode; interpreting the interaction of said sensing device within said scroll zone as a scroll action; and scrolling a page displayed on said display device according to said scroll action.
12. The system of claim 11, wherein said computer system is configured for at least one of: interpreting at least one absolute position of said sensing device within said scroll zone to be indicative of a scroll direction; and interpreting relative motion of said sensing device within said scroll zone to be indicative of a scroll direction.
13. The system of claim 1, wherein the position-coding pattern is further indicative of an identity of the substrate and the interaction data comprises substrate identity data.
14. The system of claim 13, wherein the substrate is a cursor control substrate and said computer system is configured for using the substrate identity data to retrieve a cursor page description corresponding to said cursor control substrate, said cursor page description comprising a cursor zone within which the interaction of said sensing device is interpreted as said cursor movement.
15. The system of claim 14, wherein said cursor page description comprises a scroll zone within which the interaction of said sensing device is interpreted as a scroll action, and wherein said computer system is configured to scroll a page displayed on said display device according to said scroll action.
16. The system of claim 15, wherein said cursor control substrate has visible markings indicating at least one of: said cursor zone, said scroll zone and a scroll direction.
17. The system of claim 15, wherein said scroll zone is located at an edge region of said substrate.
18. A method of controlling movement of a cursor on a display device via a substrate having a position-coding pattern disposed on or in a surface thereof, said method comprising the steps of: receiving, in a computer system, interaction data indicative of an interaction of the sensing device with the substrate, said interaction data comprising: absolute motion data indicative of a plurality of absolute positions of the sensing device relative to the surface; and orientation data indicative of an orientation of the sensing device relative to the substrate; and using the orientation data to translate the absolute motion data into relative motion data, said relative motion data being indicative of position changes of the sensing device relative to itself; interpreting said relative motion data as cursor movement; and generating cursor control commands for said display device.
19. A sensing device for controlling movement of a cursor on a display device, said sensing device comprising: an image sensor for optically imaging a position-coding pattern disposed on or in a surface; and a processor configured for: generating absolute motion data by determining a plurality of absolute positions of the sensing device relative to the surface using the imaged position- coding pattern; generating orientation data indicative of an orientation of the sensing device relative to the substrate; and using the orientation data to translate the absolute motion data into relative motion data, said relative motion data being indicative of relative motion of the sensing device from the perspective of a user; and communication means for communicating the relative motion data to a computer system, thereby enabling the computer system to generate cursor control commands using the relative motion data for controlling movement of the cursor on the display device.
20. A computer system for controlling movement of a cursor on a display device via a substrate having a position-coding pattern disposed on or in a surface thereof, said computer system being configured for: receiving interaction data indicative of an interaction of the sensing device with the substrate, said interaction data comprising: absolute motion data indicative of a plurality of absolute positions of the sensing device relative to the surface; and orientation data indicative of an orientation of the sensing device relative to the substrate; using the orientation data to translate the absolute motion data into relative motion data, said relative motion data being indicative of position changes of the sensing device relative to itself; interpreting said relative motion data as cursor movement; and generating cursor control commands for said display device.
PCT/AU2008/000047 2007-02-08 2008-01-17 System for controlling movement of a cursor on a display device WO2008095227A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US88877507P 2007-02-08 2007-02-08
US60/888,775 2007-02-08

Publications (1)

Publication Number Publication Date
WO2008095227A1 true WO2008095227A1 (en) 2008-08-14

Family

ID=39681171

Family Applications (6)

Application Number Title Priority Date Filing Date
PCT/AU2008/000047 WO2008095227A1 (en) 2007-02-08 2008-01-17 System for controlling movement of a cursor on a display device
PCT/AU2008/000046 WO2008095226A1 (en) 2007-02-08 2008-01-17 Bar code reading method
PCT/AU2008/000048 WO2008095228A1 (en) 2007-02-08 2008-01-17 Method of sensing motion of a sensing device relative to a surface
PCT/AU2008/000124 WO2008095232A1 (en) 2007-02-08 2008-02-05 Coding pattern comprising tags with x and y coordinate data divided into respective halves of each tag
PCT/AU2008/000123 WO2008095231A1 (en) 2007-02-08 2008-02-05 Coding pattern comprising translation symbols for aligning cells with tags
PCT/AU2008/000125 WO2008122070A1 (en) 2007-02-08 2008-02-05 Coding pattern comprising replicated and non-replicated coordinate data

Family Applications After (5)

Application Number Title Priority Date Filing Date
PCT/AU2008/000046 WO2008095226A1 (en) 2007-02-08 2008-01-17 Bar code reading method
PCT/AU2008/000048 WO2008095228A1 (en) 2007-02-08 2008-01-17 Method of sensing motion of a sensing device relative to a surface
PCT/AU2008/000124 WO2008095232A1 (en) 2007-02-08 2008-02-05 Coding pattern comprising tags with x and y coordinate data divided into respective halves of each tag
PCT/AU2008/000123 WO2008095231A1 (en) 2007-02-08 2008-02-05 Coding pattern comprising translation symbols for aligning cells with tags
PCT/AU2008/000125 WO2008122070A1 (en) 2007-02-08 2008-02-05 Coding pattern comprising replicated and non-replicated coordinate data

Country Status (7)

Country Link
US (28) US20080192234A1 (en)
EP (3) EP2132615A4 (en)
JP (1) JP4986186B2 (en)
CN (2) CN101636709A (en)
AU (2) AU2008213887B2 (en)
CA (2) CA2675689A1 (en)
WO (6) WO2008095227A1 (en)

Families Citing this family (63)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7213989B2 (en) * 2000-05-23 2007-05-08 Silverbrook Research Pty Ltd Ink distribution structure for a printhead
JP4556705B2 (en) * 2005-02-28 2010-10-06 富士ゼロックス株式会社 Two-dimensional coordinate identification apparatus, image forming apparatus, and two-dimensional coordinate identification method
US20080192234A1 (en) * 2007-02-08 2008-08-14 Silverbrook Research Pty Ltd Method of sensing motion of a sensing device relative to a surface
US20080256484A1 (en) * 2007-04-12 2008-10-16 Microsoft Corporation Techniques for aligning and positioning objects
US8126196B2 (en) * 2007-09-21 2012-02-28 Silverbrook Research Pty Ltd Method of imaging a coding pattern comprising reed-solomon codewords encoded by mixed multi-pulse position modulation
US8823645B2 (en) * 2010-12-28 2014-09-02 Panasonic Corporation Apparatus for remotely controlling another apparatus and having self-orientating capability
US7546694B1 (en) * 2008-04-03 2009-06-16 Il Poom Jeong Combination drawing/measuring pen
JP4385169B1 (en) 2008-11-25 2009-12-16 健治 吉田 Handwriting input / output system, handwriting input sheet, information input system, information input auxiliary sheet
US20100084478A1 (en) * 2008-10-02 2010-04-08 Silverbrook Research Pty Ltd Coding pattern comprising columns and rows of coordinate data
JP2010185692A (en) * 2009-02-10 2010-08-26 Hitachi High-Technologies Corp Device, system and method for inspecting disk surface
US8947400B2 (en) * 2009-06-11 2015-02-03 Nokia Corporation Apparatus, methods and computer readable storage mediums for providing a user interface
US8483448B2 (en) 2009-11-17 2013-07-09 Scanable, Inc. Electronic sales method
US20110128258A1 (en) * 2009-11-30 2011-06-02 Hui-Hu Liang Mouse Pen
WO2011091464A1 (en) * 2010-01-27 2011-08-04 Silverbrook Research Pty Ltd Coding pattern comprising registration codeword having variants corresponding to possible registrations
WO2011091465A1 (en) * 2010-01-27 2011-08-04 Silverbrook Research Pty Ltd Coding pattern comprising control symbols
US20110181916A1 (en) * 2010-01-27 2011-07-28 Silverbrook Research Pty Ltd Method of encoding coding pattern to minimize clustering of macrodots
US8292190B2 (en) * 2010-01-27 2012-10-23 Silverbrook Research Pty Ltd Coding pattern comprising registration codeword having variants corresponding to possible registrations
US8276827B2 (en) * 2010-01-27 2012-10-02 Silverbrook Research Pty Ltd Coding pattern comprising control symbols
US8276828B2 (en) * 2010-01-27 2012-10-02 Silverbrook Research Pty Ltd Method of decoding coding pattern comprising control symbols
US20110180612A1 (en) * 2010-01-27 2011-07-28 Silverbrook Research Pty Ltd Coding pattern comprising multi-ppm data symbols with minimal clustering of macrodots
US20110182514A1 (en) * 2010-01-27 2011-07-28 Silverbrook Research Pty Ltd Method of decoding coding pattern having self-encoded format
US20110180602A1 (en) * 2010-01-27 2011-07-28 Silverbrook Research Pty Ltd Method of imaging coding pattern using variant registration codewords
US20110182521A1 (en) * 2010-01-27 2011-07-28 Silverbrook Research Pty Ltd Method of decoding coding pattern with variable number of missing data symbols positioned outside imaging field-of-view
US20110180611A1 (en) * 2010-01-27 2011-07-28 Silverbrook Research Pty Ltd Coding pattern comprising multi-ppm data symbols in a format identified by registration symbols
KR101785010B1 (en) 2011-07-12 2017-10-12 삼성전자주식회사 Nonvolatile memory device
JP5589597B2 (en) * 2010-06-22 2014-09-17 コニカミノルタ株式会社 Image forming apparatus, operation control method, and control program
US9025850B2 (en) * 2010-06-25 2015-05-05 Cireca Theranostics, Llc Method for analyzing biological specimens by spectral imaging
US8415968B2 (en) * 2010-07-30 2013-04-09 The Board Of Regents Of The University Of Texas System Data tag control for quantum-dot cellular automata
US9483677B2 (en) 2010-09-20 2016-11-01 Hid Global Corporation Machine-readable symbols
BR112013006486A2 (en) 2010-09-20 2016-07-26 Lumidigm Inc machine readable symbols.
TWI467462B (en) * 2010-10-01 2015-01-01 Univ Nat Taiwan Science Tech Active browsing method
US10620754B2 (en) * 2010-11-22 2020-04-14 3M Innovative Properties Company Touch-sensitive device with electrodes having location pattern included therein
BE1019809A3 (en) * 2011-06-17 2012-12-04 Inventive Designers Nv TOOL FOR DESIGNING A DOCUMENT FLOW PROCESS
US8619267B2 (en) * 2011-07-08 2013-12-31 Avago Technologies General Ip (Singapore) Pte. Ltd. Proximity sensor with motion detection
US20130033460A1 (en) * 2011-08-03 2013-02-07 Silverbrook Research Pty Ltd Method of notetaking using optically imaging pen with source document referencing
JP2013105367A (en) * 2011-11-15 2013-05-30 Hitachi Ltd Thin client system and server apparatus
US9389679B2 (en) * 2011-11-30 2016-07-12 Microsoft Technology Licensing, Llc Application programming interface for a multi-pointer indirect touch input device
CN102710978B (en) * 2012-04-12 2016-06-29 深圳Tcl新技术有限公司 The cursor-moving method of television set and device
TWI485577B (en) * 2012-05-03 2015-05-21 Compal Electronics Inc Electronic apparatus and operating method thereof
JP5544609B2 (en) * 2012-10-29 2014-07-09 健治 吉田 Handwriting input / output system
TW201423484A (en) * 2012-12-14 2014-06-16 Pixart Imaging Inc Motion detection system
US11287897B2 (en) * 2012-12-14 2022-03-29 Pixart Imaging Inc. Motion detecting system having multiple sensors
US9104933B2 (en) 2013-01-29 2015-08-11 Honeywell International Inc. Covert bar code pattern design and decoding
US20140340423A1 (en) * 2013-03-15 2014-11-20 Nexref Technologies, Llc Marker-based augmented reality (AR) display with inventory management
US20150058753A1 (en) * 2013-08-22 2015-02-26 Citrix Systems, Inc. Sharing electronic drawings in collaborative environments
KR20150057422A (en) * 2013-11-19 2015-05-28 한국전자통신연구원 Method for transmitting and receiving data, display apparatus and pointing apparatus
US9489048B2 (en) * 2013-12-13 2016-11-08 Immersion Corporation Systems and methods for optical transmission of haptic display parameters
WO2015099200A1 (en) * 2013-12-27 2015-07-02 グリッドマーク株式会社 Information input assistance sheet
US9582864B2 (en) 2014-01-10 2017-02-28 Perkinelmer Cellular Technologies Germany Gmbh Method and system for image correction using a quasiperiodic grid
TWI503707B (en) * 2014-01-17 2015-10-11 Egalax Empia Technology Inc Active stylus with switching function
EP3271862B1 (en) 2014-06-19 2020-08-05 Samsung Electronics Co., Ltd. Methods and apparatus for barcode reading and encoding
US20160188825A1 (en) * 2014-12-30 2016-06-30 Covidien Lp System and method for cytopathological and genetic data based treatment protocol identification and tracking
DE112015005883T5 (en) * 2015-01-30 2017-09-28 Hewlett-Packard Development Company, L.P. M-EARS CYCLIC CODING
CA3205046A1 (en) 2015-04-07 2016-10-13 Gen-Probe Incorporated Systems and methods for reading machine-readable labels on sample receptacles
US10037149B2 (en) * 2016-06-17 2018-07-31 Seagate Technology Llc Read cache management
MX2020001107A (en) 2017-07-28 2020-10-28 Coca Cola Co Method and apparatus for encoding and decoding circular symbolic codes.
US10859363B2 (en) 2017-09-27 2020-12-08 Stanley Black & Decker, Inc. Tape rule assembly with linear optical encoder for sensing human-readable graduations of length
WO2019199324A1 (en) 2018-04-13 2019-10-17 Hewlett-Packard Development Company, L.P. Surfaces with information marks
CN109163743B (en) * 2018-07-13 2021-04-02 合肥工业大学 Coding and decoding algorithm of two-dimensional absolute position measuring sensor
WO2020091764A1 (en) * 2018-10-31 2020-05-07 Hewlett-Packard Development Company, L.P. Recovering perspective distortions
EP3672251A1 (en) * 2018-12-20 2020-06-24 Koninklijke KPN N.V. Processing video data for a video player apparatus
US11195172B2 (en) * 2019-07-24 2021-12-07 Capital One Services, Llc Training a neural network model for recognizing handwritten signatures based on different cursive fonts and transformations
TWI790783B (en) * 2021-10-20 2023-01-21 財團法人工業技術研究院 Encoded substrate, coordinate-positioning system and method thereof

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050200610A1 (en) * 2002-10-24 2005-09-15 Anoto Ab Information processing system containing an arrangement for enabling printing on demand of positiom coded bases
US7048198B2 (en) * 2004-04-22 2006-05-23 Microsoft Corporation Coded pattern for an optical device and a prepared surface

Family Cites Families (108)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US237145A (en) * 1881-02-01 Fkancis h
AUPQ582900A0 (en) 2000-02-24 2000-03-16 Silverbrook Research Pty Ltd Printed media production
SE440721B (en) * 1982-04-02 1985-08-19 C G Folke Ericsson BACKGROUND MOTOR DEVICE
US4575581A (en) * 1984-02-06 1986-03-11 Edwin Langberg Digitizer and position encoders and calibration system for same
US4864618A (en) 1986-11-26 1989-09-05 Wright Technologies, L.P. Automated transaction system with modular printhead having print authentication feature
US4924078A (en) * 1987-11-25 1990-05-08 Sant Anselmo Carl Identification symbol, system and method
US5979768A (en) * 1988-01-14 1999-11-09 Intermec I.P. Corp. Enhanced bar code resolution through relative movement of sensor and object
US5051736A (en) * 1989-06-28 1991-09-24 International Business Machines Corporation Optical stylus and passive digitizing tablet data input system
DE9013392U1 (en) * 1990-09-21 1991-04-25 Siemens Nixdorf Informationssysteme Ag, 4790 Paderborn, De
US5202552A (en) * 1991-04-22 1993-04-13 Macmillan Bloedel Limited Data with perimeter identification tag
US5412194A (en) 1992-03-25 1995-05-02 Storage Technology Corporation Robust coding system
US5852434A (en) * 1992-04-03 1998-12-22 Sekendur; Oral F. Absolute optical position determination
US5477012A (en) * 1992-04-03 1995-12-19 Sekendur; Oral F. Optical position determination
CN1104791A (en) * 1993-12-30 1995-07-05 富冈信 Two dimensional code for processing data
JPH07225515A (en) * 1994-02-16 1995-08-22 Ricoh Co Ltd Developing device
US7387253B1 (en) * 1996-09-03 2008-06-17 Hand Held Products, Inc. Optical reader system comprising local host processor and optical reader
JP2788604B2 (en) * 1994-06-20 1998-08-20 インターナショナル・ビジネス・マシーンズ・コーポレイション Information display tag having two-dimensional information pattern, image processing method and image processing apparatus using the same
US5652412A (en) * 1994-07-11 1997-07-29 Sia Technology Corp. Pen and paper information recording system
US5661506A (en) * 1994-11-10 1997-08-26 Sia Technology Corporation Pen and paper information recording system using an imaging pen
JP2952170B2 (en) 1994-12-16 1999-09-20 オリンパス光学工業株式会社 Information reproduction system
US5852412A (en) * 1995-10-30 1998-12-22 Honeywell Inc. Differential ground station repeater
US6081261A (en) 1995-11-01 2000-06-27 Ricoh Corporation Manual entry interactive paper and electronic document handling and processing system
US5692073A (en) * 1996-05-03 1997-11-25 Xerox Corporation Formless forms and paper web using a reference-based mark extraction technique
US5862271A (en) 1996-12-20 1999-01-19 Xerox Corporation Parallel propagating embedded binary sequences for characterizing and parameterizing two dimensional image domain code patterns in N-dimensional address space
JP3856531B2 (en) * 1997-06-26 2006-12-13 山形カシオ株式会社 Coordinate data conversion method and apparatus
US6518950B1 (en) 1997-10-07 2003-02-11 Interval Research Corporation Methods and systems for providing human/computer interfaces
JPH11161731A (en) * 1997-11-27 1999-06-18 Olympus Optical Co Ltd Reading auxiliary member having code pattern
JPH11201797A (en) 1998-01-12 1999-07-30 Aichi Tokei Denki Co Ltd Display device of service water meter
WO1999050787A1 (en) 1998-04-01 1999-10-07 Xerox Corporation Cross-network functions via linked hardcopy and electronic documents
US6330976B1 (en) * 1998-04-01 2001-12-18 Xerox Corporation Marking medium area with encoded identifier for producing action through network
US6305608B1 (en) * 1998-06-04 2001-10-23 Olympus Optical Co., Ltd. Pen type code reader
US6964374B1 (en) * 1998-10-02 2005-11-15 Lucent Technologies Inc. Retrieval and manipulation of electronically stored information via pointers embedded in the associated printed material
US6088482A (en) * 1998-10-22 2000-07-11 Symbol Technologies, Inc. Techniques for reading two dimensional code, including maxicode
US6737591B1 (en) * 1999-05-25 2004-05-18 Silverbrook Research Pty Ltd Orientation sensing device
US7762453B2 (en) 1999-05-25 2010-07-27 Silverbrook Research Pty Ltd Method of providing information via a printed substrate with every interaction
US6428155B1 (en) * 1999-05-25 2002-08-06 Silverbrook Research Pty Ltd Printer cartridge including machine readable ink
US7055739B1 (en) 1999-05-25 2006-06-06 Silverbrook Research Pty Ltd Identity-coded surface with reference points
US7793824B2 (en) 1999-05-25 2010-09-14 Silverbrook Research Pty Ltd System for enabling access to information
US7105753B1 (en) 1999-05-25 2006-09-12 Silverbrook Research Pty Ltd Orientation sensing device
SE516522C2 (en) * 1999-05-28 2002-01-22 Anoto Ab Position determining product for digitization of drawings or handwritten information, obtains displacement between symbol strings along symbol rows when symbol strings are repeated on symbol rows
CA2374811C (en) 1999-05-28 2012-04-10 Anoto Ab Position determination
US7792298B2 (en) * 1999-06-30 2010-09-07 Silverbrook Research Pty Ltd Method of using a mobile device to authenticate a printed token and output an image associated with the token
AU2003900983A0 (en) 2003-03-04 2003-03-20 Silverbrook Research Pty Ltd Methods, systems and apparatus (NPT023)
AU2002952259A0 (en) * 2002-10-25 2002-11-07 Silverbrook Research Pty Ltd Methods and apparatus
AU2003900746A0 (en) 2003-02-17 2003-03-06 Silverbrook Research Pty Ltd Methods, systems and apparatus (NPS041)
SE517445C2 (en) * 1999-10-01 2002-06-04 Anoto Ab Position determination on a surface provided with a position coding pattern
US7015901B2 (en) 1999-10-25 2006-03-21 Silverbrook Research Pty Ltd Universal pen with code sensor
CN1312608C (en) 1999-10-25 2007-04-25 西尔弗布鲁克研究股份有限公司 Category buttons on interactive paper
ATE453899T1 (en) * 1999-10-26 2010-01-15 Datalogic Spa METHOD FOR RECONSTRUCTING A STRIP CODE BY SEQUENTIAL SCANNING
EP1107064A3 (en) * 1999-12-06 2004-12-29 Olympus Optical Co., Ltd. Exposure apparatus
US6992655B2 (en) * 2000-02-18 2006-01-31 Anoto Ab Input unit arrangement
US6864880B2 (en) * 2000-03-21 2005-03-08 Anoto Ab Device and method for communication
US7143952B2 (en) * 2000-03-21 2006-12-05 Anoto Ab Apparatus and methods relating to image coding
US20060082557A1 (en) * 2000-04-05 2006-04-20 Anoto Ip Lic Hb Combined detection of position-coding pattern and bar codes
JP4376425B2 (en) 2000-05-08 2009-12-02 株式会社ワコム Variable capacitor and position indicator
US6830198B2 (en) * 2000-06-30 2004-12-14 Silverbrook Research Pty Ltd Generating tags using tag format structure
EP1410281A2 (en) * 2000-07-10 2004-04-21 BMC Software, Inc. System and method of enterprise systems and business impact management
US6592039B1 (en) * 2000-08-23 2003-07-15 International Business Machines Corporation Digital pen using interferometry for relative and absolute pen position
AUPR440901A0 (en) 2001-04-12 2001-05-17 Silverbrook Research Pty. Ltd. Error detection and correction
JP3523618B2 (en) 2001-08-02 2004-04-26 シャープ株式会社 Coordinate input system and coordinate pattern forming paper used for the coordinate input system
US7145556B2 (en) 2001-10-29 2006-12-05 Anoto Ab Method and device for decoding a position-coding pattern
US6964437B2 (en) * 2001-12-14 2005-11-15 Superba (Societa Anonyme) Process and device for knotting a yarn on a spool
US6966493B2 (en) * 2001-12-18 2005-11-22 Rf Saw Components, Incorporated Surface acoustic wave identification tag having enhanced data content and methods of operation and manufacture thereof
SE520748C2 (en) * 2001-12-27 2003-08-19 Anoto Ab Activation of products with embedded functionality in an information management system
AU2003228476A1 (en) * 2002-04-09 2003-10-27 The Escher Group, Ltd. Encoding and decoding data using angular symbology and beacons
US20030197878A1 (en) * 2002-04-17 2003-10-23 Eric Metois Data encoding and workpiece authentication using halftone information
US20040070616A1 (en) * 2002-06-02 2004-04-15 Hildebrandt Peter W. Electronic whiteboard
US6906416B2 (en) * 2002-10-08 2005-06-14 Chippac, Inc. Semiconductor multi-package module having inverted second package stacked over die-up flip-chip ball grid array (BGA) package
CA2502483C (en) 2002-10-25 2010-12-21 Silverbrook Research Pty Ltd Orientation-indicating cyclic position codes
WO2004051557A1 (en) * 2002-12-03 2004-06-17 Silverbrook Research Pty Ltd Rotationally symmetric tags
US7502507B2 (en) * 2002-10-31 2009-03-10 Microsoft Corporation Active embedded interaction code
SE0301143D0 (en) * 2003-04-17 2003-04-17 C Technologies Ab Method and device for loading data
JP4708186B2 (en) * 2003-05-02 2011-06-22 豊 木内 2D code decoding program
US7637430B2 (en) * 2003-05-12 2009-12-29 Hand Held Products, Inc. Picture taking optical reader
US6833287B1 (en) * 2003-06-16 2004-12-21 St Assembly Test Services Inc. System for semiconductor package with stacked dies
US7364081B2 (en) * 2003-12-02 2008-04-29 Hand Held Products, Inc. Method and apparatus for reading under sampled bar code symbols
US7853193B2 (en) * 2004-03-17 2010-12-14 Leapfrog Enterprises, Inc. Method and device for audibly instructing a user to interact with a function
JP4301986B2 (en) 2004-03-30 2009-07-22 アルパイン株式会社 Complex information processing device
US7342575B1 (en) * 2004-04-06 2008-03-11 Hewlett-Packard Development Company, L.P. Electronic writing systems and methods
CA2567253A1 (en) * 2004-05-18 2005-11-24 Silverbrook Research Pty Ltd Pharmaceutical product tracking
TWI236239B (en) * 2004-05-25 2005-07-11 Elan Microelectronics Corp Remote controller
US7672532B2 (en) 2004-07-01 2010-03-02 Exphand Inc. Dithered encoding and decoding information transference system and method
US7656395B2 (en) * 2004-07-15 2010-02-02 Microsoft Corporation Methods and apparatuses for compound tracking systems
GB0417069D0 (en) * 2004-07-30 2004-09-01 Hewlett Packard Development Co Methods, apparatus and software for validating entries made on a form
JP2008508621A (en) 2004-08-03 2008-03-21 シルバーブルック リサーチ ピーティワイ リミテッド Walk-up printing
US7166924B2 (en) * 2004-08-17 2007-01-23 Intel Corporation Electronic packages with dice landed on wire bonds
TWI276986B (en) * 2004-11-19 2007-03-21 Au Optronics Corp Handwriting input apparatus
JP4235167B2 (en) 2004-12-13 2009-03-11 新日本製鐵株式会社 Coil inner diameter holding jig and coil insertion method using the same
US20060139338A1 (en) * 2004-12-16 2006-06-29 Robrecht Michael J Transparent optical digitizer
KR20070112148A (en) * 2005-02-23 2007-11-22 아노토 아베 Method in electronic pen, computer program product, and electronic pen
JP4556705B2 (en) * 2005-02-28 2010-10-06 富士ゼロックス株式会社 Two-dimensional coordinate identification apparatus, image forming apparatus, and two-dimensional coordinate identification method
US7729539B2 (en) 2005-05-31 2010-06-01 Microsoft Corporation Fast error-correcting of embedded interaction codes
GB2428124B (en) * 2005-07-07 2010-04-14 Hewlett Packard Development Co Data input apparatus and method
AU2006274486B2 (en) 2005-07-25 2009-09-17 Silverbrook Research Pty Ltd Product item having coded data identifying a layout
US7858307B2 (en) * 2005-08-09 2010-12-28 Maxwell Sensors, Inc. Light transmitted assay beads
US7622182B2 (en) * 2005-08-17 2009-11-24 Microsoft Corporation Embedded interaction code enabled display
JP4586677B2 (en) * 2005-08-24 2010-11-24 富士ゼロックス株式会社 Image forming apparatus
US7605476B2 (en) * 2005-09-27 2009-10-20 Stmicroelectronics S.R.L. Stacked die semiconductor package
GB2431032A (en) * 2005-10-05 2007-04-11 Hewlett Packard Development Co Data encoding pattern comprising markings composed of coloured sub-markings
JP2007115201A (en) * 2005-10-24 2007-05-10 Fuji Xerox Co Ltd Electronic document management system, medical information system, printing method of chart form and chart form
JP2007145317A (en) 2005-10-28 2007-06-14 Soriton Syst:Kk Taking off and landing device of flying body
GB2432341B (en) 2005-10-29 2009-10-14 Hewlett Packard Development Co Marking material
US7934660B2 (en) * 2006-01-05 2011-05-03 Hand Held Products, Inc. Data collection system having reconfigurable data collection terminal
WO2007145317A1 (en) * 2006-06-16 2007-12-21 Pioneer Corporation Two-dimensional code pattern, two-dimensional code pattern display device, and its reading device
JP4375377B2 (en) 2006-09-19 2009-12-02 富士ゼロックス株式会社 WRITING INFORMATION PROCESSING SYSTEM, WRITING INFORMATION GENERATION DEVICE, AND PROGRAM
US7479236B2 (en) * 2006-09-29 2009-01-20 Lam Research Corporation Offset correction techniques for positioning substrates
US8291346B2 (en) * 2006-11-07 2012-10-16 Apple Inc. 3D remote control system employing absolute and relative position detection
US20080192234A1 (en) * 2007-02-08 2008-08-14 Silverbrook Research Pty Ltd Method of sensing motion of a sensing device relative to a surface

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050200610A1 (en) * 2002-10-24 2005-09-15 Anoto Ab Information processing system containing an arrangement for enabling printing on demand of positiom coded bases
US7048198B2 (en) * 2004-04-22 2006-05-23 Microsoft Corporation Coded pattern for an optical device and a prepared surface

Also Published As

Publication number Publication date
CA2675689A1 (en) 2008-08-14
US7905406B2 (en) 2011-03-15
AU2008213886A1 (en) 2008-08-14
US8016204B2 (en) 2011-09-13
US20080191016A1 (en) 2008-08-14
US20080191024A1 (en) 2008-08-14
EP2132615A4 (en) 2012-02-08
US20080193045A1 (en) 2008-08-14
US7913923B2 (en) 2011-03-29
AU2008213886B2 (en) 2010-04-22
EP2118723A1 (en) 2009-11-18
CN101606167A (en) 2009-12-16
WO2008095231A1 (en) 2008-08-14
US20080193053A1 (en) 2008-08-14
US20080191041A1 (en) 2008-08-14
US7604182B2 (en) 2009-10-20
US20080191021A1 (en) 2008-08-14
EP2118819A1 (en) 2009-11-18
US20080193044A1 (en) 2008-08-14
US20080192022A1 (en) 2008-08-14
WO2008095226A1 (en) 2008-08-14
US20080191017A1 (en) 2008-08-14
US7905405B2 (en) 2011-03-15
AU2008213887A1 (en) 2008-08-14
US20080193054A1 (en) 2008-08-14
US20080273010A1 (en) 2008-11-06
US20080192006A1 (en) 2008-08-14
US20080193007A1 (en) 2008-08-14
US8028925B2 (en) 2011-10-04
US20080191018A1 (en) 2008-08-14
US8107733B2 (en) 2012-01-31
WO2008095228A1 (en) 2008-08-14
US20080192234A1 (en) 2008-08-14
US8006912B2 (en) 2011-08-30
US20080191037A1 (en) 2008-08-14
US8011595B2 (en) 2011-09-06
US20080191036A1 (en) 2008-08-14
WO2008095232A1 (en) 2008-08-14
JP4986186B2 (en) 2012-07-25
US8118234B2 (en) 2012-02-21
US20080191019A1 (en) 2008-08-14
US20110084141A1 (en) 2011-04-14
CN101606167B (en) 2012-04-04
EP2118723A4 (en) 2011-06-29
US7959081B2 (en) 2011-06-14
US8416188B2 (en) 2013-04-09
US20080191038A1 (en) 2008-08-14
US7878404B2 (en) 2011-02-01
CA2675693A1 (en) 2008-08-14
US20080191040A1 (en) 2008-08-14
US8320678B2 (en) 2012-11-27
US20130075482A1 (en) 2013-03-28
AU2008213887B2 (en) 2010-04-22
US20080192004A1 (en) 2008-08-14
EP2118819A4 (en) 2011-05-04
WO2008122070A1 (en) 2008-10-16
US20080191020A1 (en) 2008-08-14
US20100001083A1 (en) 2010-01-07
US20120006906A1 (en) 2012-01-12
EP2132615A1 (en) 2009-12-16
US20080191039A1 (en) 2008-08-14
US7905423B2 (en) 2011-03-15
JP2010518496A (en) 2010-05-27
US20080193030A1 (en) 2008-08-14
US8107732B2 (en) 2012-01-31
US8204307B2 (en) 2012-06-19
CN101636709A (en) 2010-01-27
US7793855B2 (en) 2010-09-14

Similar Documents

Publication Publication Date Title
US8416188B2 (en) System for controlling movement of a cursor on a display device
CA2388135C (en) Sensing device with interchangeable nibs
AU2004201007B2 (en) Coded Surface
US7891253B2 (en) Capacitive force sensor
US7091960B1 (en) Code sensor attachment for pen
US20090080691A1 (en) Method of generating a clipping from a printed substrate
AU2008238595B2 (en) Sensing device having capacitive force sensor
AU2003262335B2 (en) Sensing device with interchangeable nibs
AU2003254771B2 (en) Method and system for capturing a note-taking session using processing sensor
AU2001216798A1 (en) Code sensor attachment for pen

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08700345

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08700345

Country of ref document: EP

Kind code of ref document: A1