WO2004111823A1 - A chordic engine for data input - Google Patents

A chordic engine for data input

Info

Publication number
WO2004111823A1
WO2004111823A1 PCT/AU2004/000797 AU2004000797W WO2004111823A1 WO 2004111823 A1 WO2004111823 A1 WO 2004111823A1 AU 2004000797 W AU2004000797 W AU 2004000797W WO 2004111823 A1 WO2004111823 A1 WO 2004111823A1
Authority
WO
WIPO (PCT)
Prior art keywords
chord
chordic
data
computer program
user
Prior art date
Application number
PCT/AU2004/000797
Other languages
French (fr)
Inventor
Bruce William Macdonald
Original Assignee
Australian Institute Of Marine Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Australian Institute Of Marine Science filed Critical Australian Institute Of Marine Science
Publication of WO2004111823A1 publication Critical patent/WO2004111823A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods
    • G06F3/0235Character input methods using chord techniques

Definitions

  • the dashed lines represent the boundary of five notional regions within which the indicia are bounded.
  • the indicia are shaded, and for convenience are of a type termed "bar chords".
  • Bar chords a number of the bar chords are not formed by a contiguous key press and these special chords are termed “hollow bar chords”.
  • Also shown beside each indicium is the corresponding "raw chord value".
  • the thumb origin is at the left hand side, and the notional regions are represented in a binary form so that any chord also can be represented as a unique decimal number - this is useful in an implementation of an embodiment of the invention as will become apparent.
  • the binary convention is reversed, increasing from 0-31 from right to left, but equally could be increasing from 0-31 from left to right.
  • a library of conventions can be chosen, and as follows from the example presented hereinbefore, it is apparent that the left hand side and/or bottom of an indicium represents the thumb origin. The thumb is always the starting point - this is particularly important if fewer than five keys are provided. Horizontal takes precedence over vertical, as does clockwise over anticlockwise. All indicia must satisfy the rule that they are resolvable into a 'press path'.
  • Table 1, included hereinafter in Appendix A for illustrative purposes, is consistent with the flow diagrams of Figs. 8 and 9.
  • the coding is written to run on the program ToolBook™ version 3.0, which is a Windows™-based multi-media authoring tool published by Asymetrix Corporation of the United States. The code is commented, as indicated by the prefix "--".
  • Fig. 21 is a flow diagram of a method for processing keypress data.
  • a human/machine interface can be implemented by means of a chordic input device connected to a chordic engine for decoding or interpreting a predefined set of chords or chord sequences generated by a user of the chordic input device.
  • a chordic input device connected to a chordic engine for decoding or interpreting a predefined set of chords or chord sequences generated by a user of the chordic input device.
  • Such an arrangement may be used to implement a Chordic Graphical User Interface (CGUI), wherein connection of the computing platform to a display device provides a means for visually indicating to a user which fingers or keys to press in order to generate one or more defined chords to activate a particular function.
  • a static display is used to visually indicate to a user which fingers or keys to press in order to generate one or more defined chords to activate a particular function.
  • Static displays provide static or unchanging information and include electronic displays, optical displays and printed text (eg. printed labels).
  • the number of predefined chords can also be increased by measurement of parametric values such as pressure and distance or displacement.
  • Pressure or force can be measured in two ways - either during travel of the switch mechanism against a resisting force or by measurement of the final pressure applied when a switch mechanism is at the end of its travel.
  • measured pressure can be used to overload a chord and thus indicate alternate events.
  • a chord of a certain value will result in different events or actions, depending on how hard the chord is pressed.
  • Distance can be measured by means of a resistance value that is proportional to the amount of displacement of a switch mechanism (eg. a linear potentiometer) and used in a manner similar to measured pressure.
  • Singlet events are defined as chord events that comprise a single chord action (ie. a particular key or combination of keys pressed once by a user to effect a particular action). Singlet events are typically used for a wide range of single action controls and represent the basic building block for more complex sequential chord events. Both a make singlet and a break singlet can be made to occur instantly or after a delay.
  • Figs. 12a and 12b represent general timing diagrams of couplets.
  • the figures show the internal structure and sequence of actions and corresponding events over time. In each case, the vertical axis 1200 represents the states of the switches, whilst the horizontal axis 1201 represents time duration.
  • the time axis is devoid of units as it represents changes of state rather than specific passages or periods of time.
  • a break couplet can be timed or untimed. Timed couplets impose a maximum time period between break of the first singlet event and make of the second singlet event and result in cancellation of the initiated couplet upon a timeout error occurring. Untimed break couplets wait forever between break of the first singlet event and make of the second singlet event and result in cancellation on error. In other words, the only event that will complete an untimed break couplet is a valid chord for the second singlet event. An invalid chord for the second singlet event will result in cancellation of the first singlet event.
  • CGUI Chordic Graphical User Interface
  • GUI Graphical User Interface
  • ALT-TAB enables a user to navigate between windows.
  • the TAB key enables transfer of the focus from control to control and the SPACE key enables alternate selection and deselection of the control that currently has focus.
  • the keyboard arrow keys enable movement within sets of controls. Focus is implicit in the operation of a mouse or other pointing device in that the current focus is determined by the position of a user click or selection and will so remain until a different focus context, such as a different window, is clicked or selected.
  • focus is managed by the underlying operating system in a manner that is transparent to the user and causes little problem for the interface designer or programmer.
  • chord focus is the set of controls that are currently active and that are the subject of any forthcoming chord events (ie. the chord focus context).
  • a window has chord focus and the contents of the window are the chord focus context. No control within the context of the window can alter the chord focus. It should be noted, however, that it might be necessary to track the most recently activated control, as this information is not implicitly retained. In other words, activating a singlet control never provides chord focus, even though a selection may be active for a period of time (eg. a spinner control held down for a certain duration).
  • chord context is the set of current controls or components that will receive and/or be activated by the next chord event. Much like the concept of focus in a conventional windowing GUI, the chord context thus defines where the next input from the user will be directed. Unlike a conventional GUI, where input focus generally follows user selection by mouse click, maintenance of the current context in a chordic-based system requires accurate tracking of previous chords generated by the user.
  • a chord context typically contains one or more chord controls or sub-contexts. In a complex interface there may be many levels of chord sub-context extending in a tree-like structure. A chord context at one level higher than the current chord context in the tree-like structure is termed the chord super context.
  • a typical difficulty index for an experienced user of a 5-bit chord set, in order of increasing difficulty, is listed hereinafter:
  • in a five-key chordic system, has all bits set and possesses the unique property of being able to be generated regardless of the current chord being pressed or held. This chord is thus ideally suited for use as a global chord of high significance.
  • chord Event Decoding Two types of actions primarily result from a chord event or a sequence of chord events. Either an action is executed (eg. the system beeps or a backlight is turned on) or a context change is made. In the latter case, the first chord of a couplet typically effects a context change such that the second chord of the couplet is made available. It is the second chord that actually delivers the effect of the couplet. It is also possible to utilise timing such that if a couplet is not completed within a specific interval, a reversal of the context change occurs and the couplet is effectively cancelled.
  • an action eg. the system beeps or a backlight is turned on
  • a context change is made.
  • the first chord of a couplet typically effects a context change such that the second chord of the couplet is made available. It is the second chord that actually delivers the effect of the couplet. It is also possible to utilise timing such that if a couplet is not completed within a specific interval, a reversal
  • Fig. 15 shows a flow chart of a timed break couplet implementation that utilises a chord stack.
  • Step 1505 represents waiting for a chord press, and once such is received, step 1510 determines whether the chord pressed is a singlet. If so, step 1515 sends the singlet pressed and there is a return to step 1505. If the result of step 1510 is no, then step 1520 tests whether for the present couplet, the second part is being awaited. If no, step 1525 determines whether it is the first part, and if yes, step 1535 ignores the chord and returns to step 1505 to await a further chord. If, however, it is not the first part of the chord, then step 1530 pushes the chord onto the chord stack and returns to step 1505.
  • Fig. 16 shows a binary timing diagram of an example of a chord microstructure for the chord sequence or couplet ⁇
  • Horizontal displays 1611 to 1615 relate to key activations by the thumb and 4 fingers, respectively, of a human hand.
  • the thumb key is activated first, followed by the third, second and first fingers, respectively.
  • the delays in discretely activating the various keys are classed as "minor make gaps” and release of the various keys is classed as "minor breaks”.
  • a minor break 1620 also occurs during activation of the second finger keypress 1613 but is remade during the intra-chord width 1630.
  • .. is activated by the thumb and first and second fingers.
  • smell and heat can only convey a very simple message, which may be cardinal in nature.
  • An embodiment using electrical impulses comprises a wrist band with contact pads that deliver mild electric currents to stimulate a wearer's muscles. When adequately spaced, such stimuli can be decoded or resolved by the wearer into distinct patterns of information.
  • Sounds can also be used in a descriptive manner to alert or confirm to a user what choice has been made.
  • a typical use of a descriptive sound in a visual interface is a "beep" or other error sound played when a user makes an incorrect or invalid choice.
  • sounds are used to indicate both choices and confirmations or corrections.
  • a simple example of the latter technique is a pentatonic scale, wherein musical chords can be assembled using up to five different tones.
  • the pentatonic scale method whilst useful, can only effectively convey a subset of the available chords as certain chords are difficult for a listener to distinguish from others.
  • Dissonance can be used to ameliorate this problem. That is, rather than using harmonious sounds to represent chords to be pressed, dissonant sounds can be used.
  • the advantage of using dissonance is that the range of available chords that can be represented is no longer constrained by the limits of the ear to distinguish between differing but seemingly similar harmonious sounds.
  • Descriptive sounds are used during or after chord presses to provide feedback to a user. For example, telephone keypads produce a different tone for each button pressed.
  • the tones act as an error correcting cue, as a particular sequence of tones becomes associated with a particular number. In a similar manner, a user of a chordic input device can receive sounds that provide feedback about the correctness of the chord being pressed. In a non-visual environment, this allows a user to control chordic software and obtain feedback about current chord presses.
  • certain sounds will become associated with actions, thus providing a separate error checking method for a non-novice user.
  • prescriptive and descriptive sounds are directed to actual chord presses.
  • similar techniques can be applied to content, whereby menus and available operations are conveyed to a user/listener by spoken language or a defined mapping of sounds to actions. For example, the command "Save" might be associated with a particular non-language sound.
  • This technique can be used for both prescriptive and descriptive situations.
  • the foregoing methods can also be practiced using a system of low frequency vibrations or other tactile sensations.
  • the notation can be extended to encompass the entire range of micro event possibilities. In practice, however, only a subset of the notation is used as the representations become clumsy if taken to completeness for any given type of chord event. This has relevance for non-visual interfaces on account of the possibility of displaying a practically unlimited sub-set of the available chord event space to a user. Thus, the fundamental relationship between any non-speech, non-visual interface and a preceding visual interface can be serviced well beyond the capacity of a user.
  • chord fingering notation This is typically done by means of notch filters, wherein the timings are matched to a cardinal scale. These normalised events can then be compared to the database of defined chord events.
  • chord fingering notation The primary emphasis of the chord fingering notation described hereinbefore is for human readability. However, a higher level “chord event description language" could be designed that would enable machine parsing and correct disambiguation.
  • the chordic engine provides services to a user interface. Such services can be delivered to both a user interface engine or an actual user.
  • One such service that the chordic engine provides to a user is the ability for the user to alter the current chord state (continually and revocably).
  • the user may choose from a known set of chord mappings such that a chordic input device can be configured to respond from initialisation in a particular and desirable way. For example, in one embodiment, pressing a particular chord sequence such as ⁇
  • chord "beats" is described hereinafter in this document.
  • Fig. 18 is a schematic block diagram of a device for implementing a 5-bit embodiment of the chordic engine.
  • the device 1800 includes a processor or processing means 1810 and a communications interface 1820.
  • the processor or processing means 1810 is preferably implemented using any microprocessor or microcontroller, but can also be implemented by means of a state machine or other discrete circuit elements that are known in the art of electronic circuit design. Commonly available microprocessors and microcontrollers include features such as on-board memory (eg. random access memory (RAM), read-only memory (ROM) and rewritable memory (EEPROM, flash memory, etc.), timers, counters and communication interfaces, all of which can be used to implement embodiments of the present invention.
  • the processor or processing means 1810 is connected to, and communicates with, the communications interface 1820 by means of a bus or other internal communication link 1830.
  • the processor 1810 includes 6 input lines (pins 1 to 6), including a common (COM) and input lines I1 to I5 for receiving chord data from a chordic input device (not shown). Input lines I1 to I5 map onto the thumb and 4 fingers of a human hand.
  • Input devices are coupled to the chordic engine via one or more input interfaces.
  • sensors and/or switches that detect which side of the chordic input device is being held by a user provide such information to the chordic engine via one or more input interfaces. This enables the chordic engine to detect which hand the user is holding the chordic input device with. Sensors and/or switches can also be used to detect vibration, movement and orientation. Data from these sensors and/or switches is also delivered to the chordic engine via the one or more input interfaces and can be used to alter certain characteristics such as increasing the hold times for chords when the user is in a high vibration environment.
  • the device 1800 comprises an Application Specific Integrated Circuit (ASIC), which can be integrated into apparatus such as a chordic input device.
  • ASIC Application Specific Integrated Circuit
  • the processor or processing means 1810 and communications interface 1820 can comprise any components available to an ASIC designer that are functionally and economically suitable.
  • the ASIC need not be restricted to two components; a single component or multiple components may suffice, as the case may be.
  • Fig. 20 is a flow diagram of a method for authenticating users of a chordic input system.
  • Keypress data is received from a user at step 2010.
  • Time durations between discrete key presses of the user that form part of a chord are determined at step 2020.
  • the time durations determined in step 2020 are compared to stored time durations at step 2030.
  • the user is authenticated at step 2040 if the difference between the determined time durations and the stored time durations is less than a predefined threshold.

Abstract

Embodiments of a chordic engine, methods and computer program products for processing chordic input are disclosed. The chordic engine identifies at least one predefined chord in keypress data received from a chordic input device (step 1920), obtains a parametric value relating to the at least one predefined chord (step 1930), and identifies a predefined chordic command based on the at least one identified chord and the parametric value (step 1940). In certain embodiments, users may be authenticated by analysis of time durations between discrete key presses that form part of a chord. In certain embodiments, a plurality of representations are output in a non-visual manner to indicate to a user which combination of one or more keys of the input device must be pressed to effect specific instructions or data input.

Description

A CHORDIC ENGINE FOR DATA INPUT
FIELD OF THE INVENTION
This invention relates to a chordic engine for implementing a human/machine interface. The chordic engine is particularly, but not exclusively suited to use with computing devices implemented in a compact format, and for computing devices to be used in mobile and adverse environments.
BACKGROUND
This application is related to Australian Patent No. 693553 (and corresponding US Patent No. 5,900,864 and European patent application having publication no. EP 0 776 550), which is incorporated herein in its entirety by reference.
Australian Patent No. 693553 describes a device and method whereby one or more indicia are displayed. Each indicium represents a user initiated instruction or data input, and is displayed in a manner to indicate which combination of one or more digits effect the instruction or data input. A plurality of keys provide for user inputs, and the keys are physically arranged to match the sequential relation of the digits. Thus the manner of displaying each indicium indicates which respective one or more keys are to be activated. Selected portions of Australian Patent No. 693553 are included hereinafter for the reader's convenience.
Fig. 1 shows a schematic arrangement for a human/machine interface implemented on a computing device. An input device 10 is coupled with a processing device 30, in turn coupled with a display device 50. The input device 10 has five input keys, which are denoted as 4, 3, 2, 1 and T. As is apparent from the representation of the human hand, each of the fingers and the thumb correspond sequentially to the like-referenced keys, and any activation of one or a combination of keys produces "a chord". The processing device 30 can be any off-the-shelf personal computer, although for mobile applications should be compact and rugged in nature, for example, in the PC/104 format. The display device 50 also can be conventional, although again it is preferred that it be as compact as possible, and in this respect a head-mounted display device is particularly suitable. In operation, the processing device 30 causes a display of information on the display device 50. In response to that information, the user inputs instructions or data by means of keyed chords. Indicia displayed on the display 50 correspond directly to a particular chord, thus giving effect to either the instruction or data input represented by the indicium. As noted above, the principle is essentially 'What You See Is What You Press' (WYSIWYP).
Fig. 2 shows a functional block diagram of a computing device upon which a human/machine interface can be implemented. The input device 10 comprises a multi-switch unit 20 constituted by the keys 4-1, T shown in Fig. 1. The switch unit 20 is connected to a debounce and latch circuit 22 that is configured to output signals only on release of the keys, as opposed to on initial depression of the keys. In this way, individual keys in a combination of keys can be depressed separately in time; however, only the total combination is signalled when all those keys are coincidentally released. The output from the debounce and latch circuit 22 passes to a bi-directional parallel port 24 of a conventional type, with the output thereof constituting a bit pattern that can be decoded by software resident in the processing device 30. In one preferred form, the input bit pattern is binary coded leading to a raw decimal value for any chord as a combination of key presses. In the same way as a conventional keyboard inputs to a personal computer, the input bit pattern functions as an interrupt to the operating system. The interrupt function is represented by the interrupt handler 32. The interrupts then pass via an input parser 34 to a command manager 36. The command manager 36 interacts with the host operating system 38, a feedback manager 40 and associated audio handler 42 and applications software 44. The command manager 36 also co-operates with the visual handler 46, which in turn drives the display device 50. The audio handler serves a particular function in providing an auditory feedback mechanism to the user on activation of any key. A pentatonic scaling is preferred, which is the division of one octave into five discrete frequencies. The division is not uniform, but rather based on a temporal consideration of what sounds pleasant to the ear. A chording of more than one key results in the reproduction/feedback of the respective notes in combination. Pentatonic scales per se are well known. The software component of the interface is essentially transparent to the user. The user is concerned only with the interface presented on the display unit 50 as the interface relates to the sequential relation of the keys equating or mapping onto the digits of the hand.
Figs. 3a and 3b show the complete set (or 'galaxy') of chords for a five-button input device. The dashed lines represent the boundary of five notional regions within which the indicia are bounded. The indicia are shaded, and for convenience are of a type termed "bar chords". Note that a number of the bar chords are not formed by a contiguous key press and these special chords are termed "hollow bar chords". Also shown beside each indicium is the corresponding "raw chord value". By convention, the thumb origin is at the left hand side, and the notional regions are represented in a binary form so that any chord also can be represented as a unique decimal number - this is useful in an implementation of an embodiment of the invention as will become apparent. The binary convention is reversed, increasing from 0-31 from right to left, but equally could be increasing from 0-31 from left to right.
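As an informal illustration of the binary coding and pentatonic feedback described above, the following Python sketch converts a set of pressed keys into a raw chord value and selects the notes to sound. The key-to-bit mapping and the note frequencies are illustrative assumptions, not values taken from the patent.

    # Minimal sketch, assuming thumb = bit 0 through fourth finger = bit 4.
    KEY_BITS = {"T": 0, "1": 1, "2": 2, "3": 3, "4": 4}

    # One pentatonic note per key; the exact frequencies are illustrative only.
    PENTATONIC_HZ = {"T": 261.6, "1": 293.7, "2": 329.6, "3": 392.0, "4": 440.0}

    def raw_chord_value(pressed_keys):
        """Return the raw decimal chord value for a combination of keys."""
        value = 0
        for key in pressed_keys:
            value |= 1 << KEY_BITS[key]
        return value

    def feedback_notes(pressed_keys):
        """Return the pentatonic notes to sound for the chord, in key order."""
        return [PENTATONIC_HZ[k] for k in sorted(pressed_keys, key=KEY_BITS.get)]

    # Example: thumb plus first finger gives raw chord value 3 and two notes.
    print(raw_chord_value({"T", "1"}), feedback_notes({"T", "1"}))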
Fig. 3c shows five different styles of chord representations, for, as it turns out, the chords having the decimal values 1-5. The first (left-most) column shows bar chords for which the partitioning into five regions has to be visualised without a graphical prompt - compare this with the bar chords in the third column. In the second column, the boundary of the five regions is represented by the bottom horizontal line and the key(s) to be chorded by the vertical line marks. This representation is termed "glyph chords". The fourth column represents the vertical, rather than horizontal, orientation of bar chords, with the thumb origin occurring at the bottom. The fifth column representation of dots and line marks is termed "dot chords".
Fig. 3d shows the galaxy of glyph chords together with their raw chord value.
Figs. 4a and 4b show an alternative representation for couplet chording for a number of indicia 150-156. Also shown is the corresponding dot chord representation. The convention adopted is that horizontal takes precedence over vertical, thus in the case of Fig. 4a leading to the couplet "4" and "4", and for Fig. 4b, "3 & 4" and "T & 4".
Fig. 5a shows an abbreviated form of an indicium 160 representing three bar chords. The two tabs 162, 164 located along the bottom edge of the indicium 160 indicate that "T & 1" are required to access any one of the three superimposed sub-indicia 166-170. The dot chord representation for those three sub-indicia 166-170 also are shown. In Fig. 5b, the tabs 182, 184 are located to the right hand side of the indicium 180, indicating the keys "3 & 4" are required as well as the corresponding key of the superimposed sub-indicia 186-190 to effect a chord.
A library of conventions can be chosen, and as follows from the example presented hereinbefore, it is apparent that the left hand side and/or bottom of an indicium represents the thumb origin. The thumb is always the starting point - this is particularly important if fewer than five keys are provided. Horizontal takes precedence over vertical, as does clockwise over anticlockwise. All indicia must satisfy the rule that they are resolvable into a 'press path'.
Fig. 6 shows a mechanical arrangement for the input device 10. The body 11 is sized to fit into the palm of the hand of a user, and in the configuration shown is suited to use by the left hand. The thumb therefore wraps around the side of the body 11 whilst the four fingers wrap across the top. The digits therefore can activate the keys in a manner previously described. Such an arrangement provides the advantage of being useable in a mobile configuration. The body 11 is securely grasped in the palm of the hand, and the digits do not have to move other than in a gross closing motion to activate the keys. The configuration shown is easily adapted for use by the right hand simply by flipping it over, and is therefore completely ambidextrous.
Figs. 7a and 7b show an alternative arrangement for an input device 10. The handheld device 10 comprises two body portions 12, 13 hingedly connected together. The arrangement of the keys with respect to each of the body parts 12, 13 is shown. The particular advantage of this configuration is that it is ambidextrous, and also can be used either in the manner shown for the controller 10 of Fig. 6 when grasped in the palm of the hand, in which case the respective body parts 12, 13 are arranged at an acute angle to each other, or with the body parts in a common plane for use as an on-bench keyboard.
It would be readily apparent to one skilled in the programming arts how to write code to implement the embodiments of the invention hereinbefore described without the exercise of any inventive faculty. In this connection, Figs. 8 and 9 are flow diagrams of the chording methodology and the couplet methodology. In Fig. 8, step 820 detects a key press. If, in step 830, a chord is to be formed on release, then step 850 checks to determine whether all the keys are released, and if so step 860 sends the value of the chord keyed. If the chord is not to be formed on release, step 840 delays sending of the chord keyed.
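A minimal sketch of the chord-on-release branch of Fig. 8 follows. It is not the Appendix A ToolBook code; the data structure and callback are assumptions made for illustration. Keys pressed at different times are accumulated, and the accumulated chord value is emitted only once every key has been released.

    # Sketch of the Fig. 8 behaviour: accumulate keys, send the chord on full release.
    class ChordAccumulator:
        def __init__(self, send):
            self.send = send          # callback that receives the chord value
            self.down = set()         # keys currently held
            self.accumulated = 0      # bitwise OR of everything pressed so far

        def key_press(self, bit):
            self.down.add(bit)
            self.accumulated |= 1 << bit

        def key_release(self, bit):
            self.down.discard(bit)
            if not self.down and self.accumulated:
                self.send(self.accumulated)   # chord is formed on release of all keys
                self.accumulated = 0

    acc = ChordAccumulator(send=lambda chord: print("chord", chord))
    acc.key_press(0); acc.key_press(1)        # thumb then first finger, staggered in time
    acc.key_release(1); acc.key_release(0)    # prints "chord 3" on the final release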
Fig. 9 concerns an extension of Fig. 8 to the forming of couplets and shows a flow diagram of a process for identifying a timed break couplet. Step 905 represents waiting for a chord press, and once such is received, step 910 determines whether the chord pressed is a singlet. If so, there is a return to step 905. If the result of step 910 is no, then step 915 tests whether for the present couplet, the second part is being awaited. If no, step 920 determines whether it is the first part, and if no again, step 930 ignores the chord and returns to step 905 to await a further chord. If, however, it is the first part of the chord, then step 925 sets the "waiting" flag and returns to step 905. In step 915, if yes, step 935 determines whether the chord is the second part, and if not, loops to step 905, but if so, step 940 clears the "waiting" flag and sends the couplet press.
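The couplet flow of Fig. 9 can similarly be sketched as a small state machine built around the "waiting" flag. The classification callbacks (singlet, first part, second part) are assumed to be supplied by the caller; this is an illustrative reading of the flow diagram rather than the Appendix A code.

    # Sketch of the Fig. 9 couplet flow: a "waiting" flag bridges the two chord presses.
    def make_couplet_decoder(is_singlet, is_first_part, is_second_part, send):
        state = {"waiting": False, "first": None}

        def on_chord(chord):
            if is_singlet(chord):
                return                                  # singlets return to waiting for a chord press
            if not state["waiting"]:
                if is_first_part(chord):
                    state["waiting"], state["first"] = True, chord
                # otherwise the chord is ignored and a further chord is awaited
            elif is_second_part(chord):
                state["waiting"] = False
                send((state["first"], chord))           # the couplet press is sent
            # an unexpected chord while waiting simply loops back to waiting

        return on_chord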
Table 1, included hereinafter in Appendix A for illustrative purposes, is consistent with the flow diagrams of Figs. 8 and 9. The coding is written to run on the program ToolBook™ version 3.0, which is a Windows™-based multi-media authoring tool published by Asymetrix Corporation of the United States. The code is commented, as indicated by the prefix "--".
A need exists to provide a method and means for implementing an improved human/machine interface compared to existing arrangements. A need also exists to provide a chordic engine for implementing a human/machine interface that is capable of an increased number of predefined chords or chord sequences compared to existing arrangements.
SUMMARY
Aspects of the present invention provide a chordic engine, a method and a computer program product for identifying predefined chordic commands in keypress data generated by a user. The chordic engine comprises input means for receiving keypress data, processing means programmed to identify at least one predefined chord in the keypress data, obtain a parametric value relating to the at least one predefined chord, and identify a predefined chordic command based on the at least one identified chord and the parametric value. The processing means of the chordic engine is programmed to embody the method for identifying predefined chordic commands in keypress data generated by a user.
The parametric value can be representative of a time duration of a keypress action, an amount of pressure exerted by a user in a keypress action and/or an amount of displacement of at least one key in a keypress action. The input means of the chordic engine preferably includes n input lines and the number of predefined chord sequences can exceed 2^n - 1 on account of the foregoing parameters.
The chordic engine may further comprise output means for outputting data representative of a predefined chord sequence. The output data may comprise predefined input data for a receiving device.
The processing means of the chordic engine may further be programmed to determine time durations between discrete key presses that form part of a chord, compare the determined time durations with corresponding stored time durations, and identify a user based on the outcome of the comparison. The processing means of the chordic engine may further be programmed to receive and store a predefined set of chord sequences and a corresponding set of output data.
Other aspects of the present invention provide a chordic engine, a method and a computer program product for authenticating users of a chordic input system. The method includes the steps of receiving keypress data from a user, determining time durations between discrete key presses of the user that form part of a chord, comparing the determined time durations to stored time durations and authenticating the user if the difference between the determined time durations and the stored time durations is less than a predefined threshold. The chordic engine and computer program product for authenticating users of a chordic input system embody the foregoing method.
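By way of illustration only, the authentication step can be sketched as a comparison of measured inter-keypress gaps against a stored per-user template. The comparison metric (mean absolute difference) and the threshold value below are assumptions; the method requires only that the difference between determined and stored durations fall below a predefined threshold.

    # Sketch of timing-based authentication: compare intra-chord keypress gaps to a stored template.
    def authenticate(measured_gaps, stored_gaps, threshold_ms=40.0):
        """Return True if the user's keypress timing matches the stored profile."""
        if not stored_gaps or len(measured_gaps) != len(stored_gaps):
            return False
        mean_diff = sum(abs(m - s) for m, s in zip(measured_gaps, stored_gaps)) / len(stored_gaps)
        return mean_diff < threshold_ms

    # Example: gaps (in ms) between the discrete key presses that form one chord.
    print(authenticate([120, 85, 60], [115, 90, 55]))   # True with the assumed threshold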
Further aspects of the present invention provide a chordic engine and a method and a computer program product for processing keypress data. The method comprises the steps of receiving keypress data from an input device having a plurality of keys assigned only to a specific single digit of a human hand, the keys being physically arranged to match the relative sequential relation of the digits; outputting a plurality of representations, wherein each representation represents an instruction or data input and is output in a manner to non-visually indicate which combination of one or more keys of the input device effect the instruction or data input, and thus which respective one or more keys are to be pressed; and identifying instructions or data embedded in the keypress data. At least one of the plurality of representations indicates in a non-visual manner that a simultaneous combination of digits effect the represented instruction or data input. The representations that non-visually indicate which combination of one or more keys of the input device effect the instruction or data input may comprise audible representations such as speech or musical chords. The musical chords may solely comprise harmonious sounds based on pentatonic scales or may comprise dissonant sounds. The representations that non-visually indicate which combination of one or more keys of the input device effect the instruction or data input may comprise movements or biological stimuli such as smell, temperature and/or electrical impulses.
The method may comprise the further step of outputting a non-chordic representation of the identified instructions or data input. The chordic engine and computer program product for processing chordic input embody the foregoing method.
The chordic engine may be implemented in software, firmware or as an Application Specific Integrated Circuit (ASIC).
BRIEF DESCRIPTION OF DRAWINGS
A small number of embodiments of the present invention will be described hereinafter, with reference to the accompanying drawings. In these drawings, Figs. 1 to 9 originate in the aforesaid Australian Patent No. 693553 and are useful for describing embodiments of the present invention.
Fig. 1 is a schematic block diagram of the broad hardware elements required to implement a human/machine interface; Fig. 2 is a schematic block diagram showing greater detail of the elements of Fig. 1;
Figs. 3a to 3d, 4 and 5 show sets of chords in accordance with various forms of representation; Figs. 6, 7a and 7b show respective views of a hand-held input device;
Figs. 8 and 9 are flow diagrams of the chording and couplet decoding methodology, respectively;
Fig. 10 shows a graphic representation of chord decoding based on chord duration, pressure or displacement; Figs. 11a to 11f, 12a and 12b show general timing diagrams of software events raised by different chord actuation types;
Fig. 13 shows an example of chordic notation for nested containers in a Chordic Graphic User Interface (CGUI);
Fig. 14 shows the concept of context in a Chordic Graphic User Interface (CGUI); Fig. 15 is a flow diagram of the couplet decoding methodology implemented by means of a chord stack;
Fig. 16 shows a binary timing diagram of an example of a chord microstructure;
Figs. 17a to 17c show various ways of indicating the order of micro-structure keypress and/or key release; Fig. 18 is a schematic block diagram of hardware with which embodiments of the chordic engine can be practiced;
Fig. 19 is a flow diagram of a method for identifying predefined chordic commands in keypress data generated by a user;
Fig. 20 is a flow diagram of a method for authenticating users of a chordic input system; and
Fig. 21 is a flow diagram of a method for processing keypress data.
DETAILED DESCRIPTION
A human/machine interface can be implemented by means of a chordic input device connected to a chordic engine for decoding or interpreting a predefined set of chords or chord sequences generated by a user of the chordic input device. Such an arrangement may be used to implement a Chordic Graphical User Interface (CGUI), wherein connection of the computing platform to a display device provides a means for visually indicating to a user which fingers or keys to press in order to generate one or more defined chords to activate a particular function. In another arrangement, a static display is used to visually indicate to a user which fingers or keys to press in order to generate one or more defined chords to activate a particular function. Static displays provide static or unchanging information and include electronic displays, optical displays and printed text (eg. printed labels). In either of the arrangements described above, the visual indications of chords may include, but are not limited to, any one or more of the schemes shown in Figs. 3a to 3d, 4a, 4b, 5a and 5b. A further scheme for visual indication of chords includes representations of a human hand with certain fingers or digits extended and the remaining fingers or digits retracted, in accordance with the structure of each particular chord (eg. the chord \||.. is indicated by a representation of a hand with the thumb, first and second fingers extended, and the third and fourth fingers retracted).
A chordic engine may be implemented in software for execution on a computing platform connected to a chordic input device. The particular computing platform used is limited only by the actual application environment and can comprise an off-the-shelf or a custom device or system. Furthermore, the chordic engine can be implemented as an Application Specific Integrated Circuit (ASIC) for system or application 'design-in' purposes, or as a microprocessor programmed with software/firmware to execute the functionality of the chordic engine. The chordic engine may thus be integrated into a chordic input device.
Appendix B, hereinafter, contains assembler source code for a 5-bit chordic engine that receives inputs from 5 switches and outputs chord events via a serial port at periodic intervals. The chord events are in the form of bit patterns that represent the current state of the 5 switches. The source code in Appendix B has been written for a 16f84 PIC microprocessor target device.
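Appendix B is not reproduced in this extract, but the behaviour it is described as implementing - sample five switches and send their state as a bit pattern over a serial port at periodic intervals - can be sketched in Python as follows. The pin mapping, polling interval and I/O helpers are illustrative assumptions, not the PIC assembler itself.

    # Sketch of the periodic 5-switch scan; read_switches() is a placeholder for real hardware I/O.
    import time

    def read_switches():
        """Return five booleans, thumb first; replace with real GPIO reads."""
        return [False, False, False, False, False]

    def scan_loop(write_byte, interval_s=0.01):
        while True:
            states = read_switches()
            pattern = 0
            for bit, closed in enumerate(states):
                if closed:
                    pattern |= 1 << bit           # one bit per switch, giving values 0..31
            write_byte(pattern)                   # emit the current chord state on the serial port
            time.sleep(interval_s)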
The chordic engine may also be designed or configured to decode chord sequences and produce pre-defined data output for input to a particular device or type of device in accordance with industry or manufacturer's standards or pseudo-standards. An example is a programmable chordic input device, incorporating the chordic engine, for sending keyboard commands to a target device such as a computer terminal or personal computer (PC). Predefined commands are uploadable from the target device for storing in the programmable chordic input device or the chordic engine. An example of a predefined command is Alt-S in a Microsoft Windows™ program for saving a file currently being processed by a computer. Upon detection of the particular designated chord (eg. \||..) by the chordic engine in the chordic input device, the Alt-S command is sent by the chordic engine to the computer. The chordic engine can thus operate in what may be called a macro mode, whereby chords and chord sequences are translated into target device specific input messages. However, the chordic engine can be switched from the macro mode to a normal mode, whereby chordic values are output directly, without translation to a predefined input set. Other target devices requiring predefined inputs and/or commands include, but are not restricted to, mobile computer terminals, mobile telephones, television sets, video cassette recorders and video cameras.
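The macro mode amounts to a lookup from detected chord values (or chord sequences) to target-device input messages. The following sketch is illustrative; the chord value assumed for the save chord and the message strings are not taken from the patent.

    # Sketch of macro mode: translate chord values into target-device commands, or pass them through.
    MACRO_TABLE = {
        7: "ALT+S",        # assumed chord value for the "save" chord \||..
        3: "ALT+TAB",      # assumed chord value for window switching
    }

    def translate(chord_value, macro_mode=True):
        if macro_mode and chord_value in MACRO_TABLE:
            return MACRO_TABLE[chord_value]       # macro mode: send the mapped command
        return str(chord_value)                   # normal mode: output the raw chordic value

    print(translate(7), translate(7, macro_mode=False))   # ALT+S 7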
A Chordic Input Device
A chordic input device typically includes a number of switches that can be in one of two states, "on" or "off". The transitions between these states comprise make and break, wherein make refers to closure of a switch (on), and break refers to opening of a switch (off). A press action or keypress is the act of pressing, making or closing a switch, whereas a release action is the act of releasing, breaking or opening a closed switch. Bi-state switches can thus be used to generate various combinations of event states, as are discussed hereinafter. It should be noted that the terms keys and switches are used interchangeably throughout this document. As a user operates a chordic input device, a stream of chord information is generated.
In a digital sense, an n-bit chordic system can provide an interface for any width of input. Practically, however, chordic systems are generally limited to between 3 and 14 bits. A relatively simple 3-bit system assigns each of three digits of a human hand to three input switches. A 5-bit system assigns each of five digits of a human hand to five input switches and a 5+-bit system enables the thumb to selectively operate two or more input switches. A 10-bit system assigns each of five digits of two human hands to ten input switches and a 10+-bit system enables both thumbs to selectively operate two or more input switches.
In an analog sense, the size of the predefined chord set can be increased without altering the chord input device hardware by way of introducing a chord duration parametric value. For example, a user can generate different chords using the same key presses by controlling the duration of the key press. The chordic engine measures the duration and identifies the chord event or action intended by the user. Most users can manage two or three additional hold durations to distinguish between different chord events. Discriminating between more than two or three hold durations becomes impractical.
An example of a duration or chord hold algorithm that caters for three durations is now described with reference to Fig. 10. Releases of keypress actions 1011 and 1012 occur as break events in event windows 1022 and 1023, respectively, while release of keypress action 1013 occurs in event phase 1024. Event phase 1024 is different to event windows 1022 and 1023 in that event phase 1024 is unbounded in time (it has no upper bound). Each of keypress actions 1011, 1012 and 1013 involves the same keypress action, but for a different time duration. After a keypress action is released, a Chord Pressed event message is sent that defines the chord based on the actual keypress duration. No Chord Pressed event message will be sent if a chord is released prior to expiry of the minimum hold time 1021. Pseudo code for an example of a duration or chord hold algorithm is contained in Table 1, hereinafter.
Table 1
Maximum Chord Value = 0
while the Chord Value of the Chord Press is greater than 0
    then set Chord Activated to true
    when the Chord Press Hold Duration is greater than Hold Event Duration 1 and less than Hold Event Duration 2
        then send Chord Down Hold Event 1
    when the Chord Press Hold Duration is greater than Hold Event Duration 2 and less than Hold Event Duration 3
        then send Chord Down Hold Event 2
    when the Chord Press Hold Duration is greater than Hold Event Duration 3
        then send Chord Down Hold Event 3
    if Chord Value of the Chord Press is greater than Maximum Chord Value
        then set Maximum Chord Value to Chord Value of the Chord Press
    end if
end while
if the Chord Value of the Chord Press equals 0 and Chord Activated is true
    then set Chord Activated to false
         send Chord Pressed Maximum Chord Value
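For comparison, the same chord-hold algorithm can be sketched in Python. The three duration thresholds are illustrative values and the send callback stands in for whatever event mechanism the host provides; treating Hold Event Duration 1 as the minimum hold time of Fig. 10 is an assumption.

    # Sketch of the Table 1 chord-hold algorithm with three hold-event durations (in seconds).
    HOLD_1, HOLD_2, HOLD_3 = 0.2, 0.6, 1.2

    def classify_hold(chord_value, hold_duration, send):
        """Send the appropriate Chord Down Hold Event for a chord still being held."""
        if chord_value <= 0:
            return
        if HOLD_1 < hold_duration < HOLD_2:
            send("Chord Down Hold Event 1", chord_value)
        elif HOLD_2 < hold_duration < HOLD_3:
            send("Chord Down Hold Event 2", chord_value)
        elif hold_duration > HOLD_3:
            send("Chord Down Hold Event 3", chord_value)

    def on_release(max_chord_value, hold_duration, send):
        """On release, send Chord Pressed for the maximum chord value seen during the press."""
        if hold_duration >= HOLD_1:               # releases shorter than the minimum hold send nothing
            send("Chord Pressed", max_chord_value)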
The number of predefined chords can also be increased by measurement of parametric values such as pressure and distance or displacement. Pressure or force can be measured in two ways - either during travel of the switch mechanism against a resisting force or by measurement of the final pressure applied when a switch mechanism is at the end of its travel. In a similar manner to keypress duration described hereinbefore, measured pressure can be used to overload a chord and thus indicate alternate events. In other words, a chord of a certain value will result in different events or actions, depending on how hard the chord is pressed. Distance can be measured by means of a resistance value that is proportional to the amount of displacement of a switch mechanism (eg. a linear potentiometer) and used in a manner similar to measured pressure. Such analog inputs can additionally be used to provide variations of activation difficulty differentiation and security mechanisms. Other analog transducers suitable for this purpose include capacitive structures that exhibit a change in capacitance or output voltage in response to an applied force or displacement, an example of which is a piezo-electric device.
An example of a pressure/distance algorithm for three pressure/distance trigger levels is now described. When a user presses a chord such that the applied pressure/distance exceeds a trigger value then the chord is considered triggered for that value. As the applied pressure/distance exceeds each further trigger value, further trigger value messages are sent. The maximum trigger value obtained in the duration of the chord press is recorded and sent on release of the chord. Typically the chording hardware would generate a value for pressure/distance in the range from 0 to 255 (ie. an 8-bit byte), where 0 corresponds to no pressure applied or distance traversed and 255 corresponds to the maximum possible pressure applied or distance traversed. The trigger values are set to appropriate values within the foregoing range. Pseudo code for the algorithm is contained in Table 2, hereinafter. For clarity purposes, the algorithm ignores the actual binary chord values and merely sends the trigger level values.
Table 2
while the pressure/distance value of the Chord Press is greater than 0
    then set Chord Activated to true
    when the Chord Press pressure/distance value is greater than trigger value 1 and less than trigger value 2
        then set the Maximum Trigger value to 1
             send Chord Down trigger value 1
    when the Chord Press pressure/distance value is greater than trigger value 2 and less than trigger value 3
        then set the Maximum Trigger value to 2
             send Chord Down trigger value 2
    when the Chord Press pressure/distance value is greater than trigger value 3
        then set the Maximum Trigger value to 3
             send Chord Down trigger value 3
end while
if the pressure/distance value of the Chord Press equals 0 and Chord Activated is true
    then set Chord Activated to false
         send Chord Pressed Maximum Trigger value
         set Maximum Trigger value to 0
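The trigger-level algorithm of Table 2 can likewise be sketched in Python over an 8-bit sensor range; the three trigger values below are illustrative only.

    # Sketch of the Table 2 pressure/distance trigger algorithm over an 8-bit sensor range.
    TRIGGER_1, TRIGGER_2, TRIGGER_3 = 60, 140, 220

    def process_press(samples, send):
        """samples: successive pressure/distance readings (0-255) for one chord press."""
        max_trigger = 0
        for value in samples:
            if value > TRIGGER_3 and max_trigger < 3:
                max_trigger = 3
                send("Chord Down trigger value 3")
            elif value > TRIGGER_2 and max_trigger < 2:
                max_trigger = 2
                send("Chord Down trigger value 2")
            elif value > TRIGGER_1 and max_trigger < 1:
                max_trigger = 1
                send("Chord Down trigger value 1")
        send(f"Chord Pressed, maximum trigger value {max_trigger}")   # sent on release

    process_press([30, 80, 150, 150, 40, 0], send=print)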
Chord Events
A chord event is a transitory message or signal that contains information pertinent to the most recent user action. Chord events are created or raised as a result of press or release actions. An action refers to an activity that takes place in real-time in the physical world, whereas chord events are derived from such actions via a set of defined rules. In the simplest form, a chord event contains at least the chord value associated with a chord action. However, a chord event may also comprise other information such as identification of a hardware device, an indication of left or right hand usage, timing data, and even rich information such as data relating to the pressure exerted in a keypress.
A make event is raised as a result of a cumulative transition in the switch state map. Thus, when a user presses an identifiable key or combination of keys, a make event is raised. Usually, a make event is eventually followed by a corresponding break event.
A break event is raised as a result of a subtractive transition in the switch state map. Thus, when a user releases a key, a break event is raised. Usually, a break event is preceded by a corresponding make event.
Singlet events are defined as chord events that comprise a single chord action (ie. a particular key or combination of keys pressed once by a user to effect a particular action). Singlet events are typically used for a wide range of single action controls and represent the basic building block for more complex sequential chord events. Both a make singlet and a break singlet can be made to occur instantly or after a delay.
Multiplet events, on the other hand, comprise two or more chord events that take place contiguously over time. The simplest example is that of a couplet, which requires a user to press a first specific chord followed by another specific chord to effect a particular action. 961 couplets are available from a five-key chordic input device. Other multiplets can be considered as multiples of the basic couplet transition. Whilst the direction of transition from a specific chord to another specific chord has no special significance for present purposes, such may have chordnomic value. In other words, transitioning from chord A to chord B may be easier or more difficult than transitioning from chord B to chord A.
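A chord event, as described above, is simply a small record carrying at least a chord value plus optional richer information. One possible (assumed) representation is sketched below, together with a check of the couplet count for a 31-chord galaxy.

    # Sketch of a chord event record and the couplet count for a 31-chord galaxy.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ChordEvent:
        chord_value: int                 # at minimum, the chord value of the action
        kind: str = "make"               # "make" or "break"
        device_id: Optional[str] = None  # optional richer information
        left_hand: Optional[bool] = None
        timestamp_ms: Optional[float] = None
        pressure: Optional[int] = None

    # Every ordered pair of the 31 non-zero chords is a distinct couplet: 31 * 31 = 961.
    print(sum(1 for first in range(1, 32) for second in range(1, 32)))   # 961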
Figs. 11a to 11f represent general timing diagrams of software events raised by different chord actuation types. The figures show the internal structure and sequence of actions and corresponding events over time. In each case, the vertical axis 1100 represents the state of the switches, whilst the horizontal axis 1101 represents time duration. The time axis is devoid of units as it represents changes of state rather than specific passages or periods of time. All the diagrams ignore any transitory intermediate chords formed as a result of pressing multiple switches. In other words, the diagrams can be considered to represent single key, or singleton chord, diagrams even though non-singleton chords may form them. As the diagrams document the most primitive activities available, any transitory intermediate chords created during the formation of a non-singleton chord press, if drawn, will merely be permutations of the basic primitive chord actuation types; it is therefore appropriate to ignore them.
Fig. 11a shows an instant break event, the simplest of events. A user presses one or more keys to initiate a press action 1110. However, a break event is only raised when the user releases all the keys (ie. release action 1111). The break event is raised on the release action 1111, regardless of the length of the press duration 1112. This event type is the most commonly used event type, is typically used for general action buttons or controls, and has no error state.
Fig. 11b shows an instant make event. A user presses one or more keys to initiate a press action 1120 and a make event, having a chord value eC corresponding to the currently pressed combination, is immediately raised. On a subsequent release action 1121 (ie. the user releasing all keys), a make event of chord value eC = 0 is raised. The diagram of Fig. 11b ignores any intermediate chords that might occur and it is additionally assumed that the user does not release or press any additional switches for the duration 1122 of the press action. This event type has no error state and is typically used for controls that require immediate responses such as on-screen pointers.
Fig. 11c shows a break-after-hold event, which is similar to a break event. However, there is now a minimum duration 1132 during which a press action 1130 must be maintained before occurrence of a release action 1131, to raise a break event. A break event is raised on the release action 1131. Release of the keys before the minimum required duration 1132 has elapsed will not result in a break event being raised. The pre-event duration 1133 is indicative of a time period during which a user could release the keys and consequently cause a break event to be raised.
Fig. 11d shows an error state for a break-after-hold event type. Release of the keys by a user, indicated by release action 1141, before the minimum duration 1142 has elapsed does not result in a break event being raised. This event type is typically used for controls that are to be protected against inadvertent activation by a user. The minimum duration 1142 is optionally configurable as a global CGUI setting. One side effect of introducing a minimum duration setting is the consequent delay introduced in the response time of the CGUI. However, as such an event type would most likely only sparingly be used, for potentially dangerous actions such as "Delete All" and the like, the delay effect may have minimal impact.
Fig. 11e shows a make-after-delay event. The essential difference between this event type and an instant make event, as shown in Fig. 11b, is a delay 1154 between when a press action 1151 is made and the corresponding make event 1152 is raised, having a chord value eC corresponding to the currently pressed combination. On a subsequent release action 1153 (ie. the user releasing all the keys), a make event of chord value eC = 0 is raised. The make delay 1154 is optionally configurable as a global CGUI setting and can serve two purposes - to reduce the effect of intermediate chords that may be formed as multiple keys are pressed by a user and/or, in a similar manner to break-after-hold, to provide a measure of security against inadvertent activation.
Fig. 11f shows the error state for a break-after-hold event. Release of the keys by a user, indicated by release action 1162, before the minimum duration 1163 has elapsed does not result in a break event being raised. This event type is typically used for controls that require holding, for example a spinner control that continuously increments a value while held.
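The four singlet actuation types of Figs. 11a to 11f differ only in whether a minimum hold is required before a break event counts, and whether the make event is delayed. The following sketch of that decoding uses illustrative parameter names and is not drawn from the patent text.

    # Sketch of singlet event decoding with optional minimum hold and make delay.
    def decode_singlet(press_time, release_time, now,
                       min_hold=0.0, make_delay=0.0):
        """Return the events raised for one press/release cycle of a chord.

        press_time / release_time are timestamps in seconds; release_time is None
        while the chord is still held. min_hold and make_delay stand in for the
        optional CGUI settings for break-after-hold and make-after-delay events.
        """
        events = []
        if release_time is None:
            if now - press_time >= make_delay:
                events.append("make")                      # instant make, or make-after-delay
        else:
            held = release_time - press_time
            if held >= min_hold:
                events.append("break")                     # instant break, or break-after-hold
            # releasing before min_hold is the error state: no event is raised
        return events

    print(decode_singlet(0.0, 0.4, 0.4, min_hold=0.5))     # [] -> break-after-hold error state
    print(decode_singlet(0.0, None, 0.3, make_delay=0.1))  # ['make'] -> make-after-delay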
The events shown in Figs. 11a to 11f represent singlet events that comprise a single chord action. Multiplet events, on the other hand, comprise two or more chord events that take place contiguously over time. The most common multiplet events are couplets, which require a user to press a specific chord followed by another specific chord to initiate a specific action. While make couplets can be made to occur instantly or after a delay, additional possibilities exist with regard to break couplets.
Figs. 12a and 12b represent general timing diagrams of couplets. The figures show the internal structure and sequence of actions and corresponding events over time. In each case, the vertical axis 1200 represents the states of the switches, whilst the horizontal axis 1201 represents time duration. The time axis is devoid of units as it represents changes of state rather than specific passages or periods of time.
A break couplet can be timed or untimed. Timed couplets impose a maximum time period between break of the first singlet event and make of the second singlet event and result in cancellation of the initiated couplet upon a timeout error occurring. Untimed break couplets wait forever between break of the first singlet event and make of the second singlet event and result in cancellation on error. In other words, the only event that will complete an untimed break couplet is a valid chord for the second singlet event. An invalid chord for the second singlet event will result in cancellation of the first singlet event.
A timed break couplet, as shown in Fig. 12a, comprises two consecutive singlet events with an intra-couplet gap 1260 between those events. Once a user has released the first singlet event of the couplet, at break event 1220, a limited amount of time is available in which to start the press action 1230 of the second singlet event. The end of this limited time period is indicated by the dotted line 1250. Failure to commence the press action 1230 within the time period available causes a timed gap test to fail and couplet timeout to occur. When press action 1230 occurs before the end of the available time period, the time between the break event 1220 and the make event 1230 represents the intra-couplet gap 1260.
This type of couplet can be used in a CGUI to implement a container control that contains one or more additional controls. Upon activation of the container, the next chord event, if raised, is addressed to the internal controls. For example, a pop-out menu that, on activation, displays normally hidden controls to a user for selection. After a period of time, the container control will conceal the controls and the chord context will collapse.
An untimed break couplet, as shown in Fig. 12b, also comprises two consecutive singlet events with an intra-couplet gap 1265 between those events. However, once a user has released the first singlet event of the couplet, at break event 1225, no time limit is imposed before which the press action 1235 to initiate the second singlet event must occur. The time between the break event 1225 and the make event 1235 represents the intra-couplet gap 1265.
This type of couplet can be used in a CGUI to implement a container control that contains one or more additional controls. Upon activation of the container, the next chord event, if raised, is addressed to the internal controls. For example, a visible container with visible controls that, on activation, indicates to a user that the container is now the current chord context and is thus available for selection. A more specific example is an onscreen keyboard that includes a separate numeric keypad. Pressing a particular chord firstly activates the container object that holds the set of numeric keys. Then, pressing a chord representing one of the numeric keys enters the corresponding number and deactivates the numeric pad container.
Chord Notation
A formalised chord notation is beneficial and necessary to define and describe the functionality of the chordic engine. In simpler chord-based systems, description of chords solely by corresponding chord values is usually sufficient. For example, the chord stream 12:24:15:18 represents consecutively generated chord numbers 12, 24, 15 and 18 from the galaxy of 31 chords for a five-key input device as described hereinbefore. However, in relatively more complex chord-based systems, it is necessary to embed additional information in the chord stream. For example, the chord stream 12;240:24;180:15;300:18;195 represents consecutively generated chord numbers 12, 24, 15 and 18, together with the time intervals 240, 180, 300 and 195, respectively, that the user either took to make the chord or held the chord down for. Still more complex chord-based systems require still more data in the chord stream. In a sophisticated multi-level chordic interface, as opposed to a flat five-key chordic interface as described above, the various layers of objects that the interface is composed of require a suitable and accurate notation. Such a notation describes the relationship between each object and its parent object and enables both the programmer and the chordic engine to maintain knowledge of the current and all possible future chord contexts. If a container panel corresponding to a root context and containing two action chords therein is represented as A, each of the chord actions within the container panel is designated Ac, where c represents a particular chord value. If the container A holds another container B, then the chord actions in the container panel B can be designated ABc, where c represents the particular chord values of those sub-actions. Thus, it is possible to annotate each sub-context within the framework of a super context.
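For illustration, the extended chord stream notation described above, in which each element carries a chord value and an associated time interval separated by a semicolon, may be parsed as sketched below. The function name parse_chord_stream is an assumption introduced solely for this example.

def parse_chord_stream(stream):
    # Parse a stream such as '12;240:24;180:15;300:18;195' into
    # (chord_value, interval_ms) pairs.
    events = []
    for element in stream.split(':'):
        value, interval = element.split(';')
        events.append((int(value), int(interval)))
    return events

# parse_chord_stream('12;240:24;180:15;300:18;195')
# -> [(12, 240), (24, 180), (15, 300), (18, 195)]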
As an example, Fig. 13 shows a container B, wholly nested within a container A. The container B contains two possible actions 1311 and 1312, represented by the chords _||.. (Cancel) and _..|| (OK), respectively. However, actions 1311 and 1312 are only available to a user once both containers A and B have been selected. Selection of container A requires activation of the couplet 1321 represented by the chords \...| and \.... .
Selection of nested container B requires activation of the singlet 1313, represented by the chord \||.. , after selection of container A. Thus, to activate the cancel chord 1311 in container B, a user is required to follow the chordic sequence: 1321, 1313 and 1311, represented by the chords \...|, \...., \||.., and _||.., respectively. On the other hand, activation of the cancel chord 1322 in container A requires that a user follows the sequence 1321, 1322, represented by the chords \...|, \.... and _||.., respectively.
A Chordic Graphical User Interface (CGUI)
A conventional Graphical User Interface (GUI) embodies the concept of focus, which relates to a currently selected object being the active object. For the most part, focus is used in windows environments, for simple keyboard control. For example, in a Microsoft Windows™ environment, the keypress sequence ALT-TAB enables a user to navigate between windows. Within a currently active window, the TAB key enables transfer of the focus from control to control and the SPACE key enables alternate selection and deselection of the control that currently has focus. The keyboard arrow keys enable movement within sets of controls. Focus is implicit in the operation of a mouse or other pointing device in that the current focus is determined by the position of a user click or selection and will so remain until a different focus context, such as a different window, is clicked or selected. For the most part, focus is managed by the underlying operating system in a manner that is transparent to the user and causes little problem for the interface designer or programmer.
In a Chordic Graphical User Interface (CGUI), however, focus has much greater significance than in a conventional GUI. In a flat CGUI, that is a CGUI comprising only singlet chords, the chord focus is only necessary to move between chord contexts such as windows, if any. As soon as a chord context contains a couplet or other multiplet, chord focus becomes more complex and must be carefully managed. There are two main aspects to chord focus management. Firstly, the chord stream originating from a user must be managed and, secondly, the location in the CGUI where those chords are to be applied must also be managed. Various methods and data structures are possible whereby a sophisticated CGUI can achieve this. While several of these are described hereinafter, it should be understood that such methods and/or data structures can be utilised in differing ways or may even not be necessary at all, as would be understood by one skilled in the relevant art.
Thus, the chord focus is the set of controls that are currently active and that are the subject of any forthcoming chord events (ie. the chord focus context). In a simple flat CGUI, a window has chord focus and the contents of the window are the chord focus context. No control within the context of the window can alter the chord focus. It should be noted, however, that it might be necessary to track the most recently activated control, as this information is not implicitly retained. In other words, activating a singlet control never provides chord focus, even though a selection may be active for a period of time (eg. a spinner control held down for a certain duration).
The chord context is the set of current controls or components that will receive and/or be activated by the next chord event. Much like the concept of focus in a conventional windowing GUI, the chord context thus defines where the next input from the user will be directed. Unlike a conventional GUI, where input focus generally follows user selection by mouse click, maintenance of the current context in a chordic-based system requires accurate tracking of previous chords generated by the user. A chord context typically contains one or more chord controls or sub-contexts. In a complex interface there may be many levels of chord sub-context extending in a tree-like structure. A chord context at one level higher than the current chord context in the tree-like structure is termed the chord super context.
The highest level of chord context is the chord root context and is present upon initial activation of a chordic-based system. All other contexts are sub-contexts of the root context. An example of what a user would perceive as a root context in a conventional GUI such as Windows 98™ is the so-called "Task Bar". In a 5-bit CGUI, the chord root context represents a field in which the chord tree has 31 trunks. Any chord events occurring at this level must lead down one of the 31 trunks. In a simple flat CGUI, all chord events occur at this level or in this context. However, a complex CGUI typically comprises various sub-contexts. Emergence from a sub-context to the chord root context occurs when a global chord is generated, for example, to change windows.
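The chord context hierarchy described above can be pictured as a tree in which each node holds the controls or sub-contexts reachable by one further chord press. The Python sketch below is illustrative only; the class and attribute names are assumptions and the chord values shown are arbitrary.

class ChordContext:
    # A node in the chord context tree (root context, window, container, ...).
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent          # the chord super context, None for the root
        self.children = {}            # chord value -> sub-context or leaf control

    def add(self, chord_value, child):
        self.children[chord_value] = child
        return child

root = ChordContext('root')                         # chord root context (31 possible trunks)
window_a = root.add(1, ChordContext('window A', root))
window_b = root.add(2, ChordContext('window B', root))
window_b.add(3, 'OK')                               # a leaf control, annotated AB3 in the notation above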
As an example, Fig. 14 shows window A and subsequently activated window B.
Couplets 1411 and 1421 represent selection chords for windows A and B, respectively. The first chord of couplets 1411 and 1421, \...|, is common to both windows A and B and comprises a global "window selection" chord. The second chord of couplets 1411 and 1421 distinguishes between different windows and thus switches the current context to the window selected by the second chord in the couplets 1411 and 1421. Each such global "window selection" chord in a 5-bit interface can thus support a maximum of 31 different windows. Activation of the OK instruction in either of windows A and B requires activation of the chord _..||, however, the instruction will apply in the context of the currently selected window. No CANCEL instruction is available in window A and activation of the chord _||.. while window A is the current context will have no effect. A currently selected context (eg. a window) can be indicated to a user by highlighting, colour change, overlaying (eg. window B overlays window A in Fig. 14), or in any other convenient manner. The standard MS Windows™ commands for maximising, minimising and closing a window 1412 are displayed in the top left hand corner of windows A and B in Fig. 14. These commands are implemented as chords (maximise = \...., minimise = J... and close = _.|..) in the context of the currently selected window.
The chord tree is the set of all possible chord paths within a CGUI and is, in most cases, more a concept than an applied structure. A CGUI that maintains a chord tree data structure wherein each leaf of the tree is a chord control is possible, however, such may prove impractical in size and complexity. All 5-bit CGUIs have a chord tree hyperspace that consists of 31 chord trunks at commencement of the CGUI. A portion of that hyperspace is filled with the actual chord paths that compose the onscreen expression of the CGUI.
Each chord in a chord tree has a differing degree of difficulty of activation by a user in relation to a particular chordic input device. Accordingly, chords may be assigned to chord press actions based on the degree of risk associated with the actual press action. For example, the action "next" can be associated with an easy-to-press chord such as \||.. and the more risky action "delete all" can be associated with a difficult-to-press chord such as
Vl-I-
A typical difficulty index for an experienced user of a 5-bit chord set, in order of increasing difficulty, is listed hereinafter:
31, 15, 16, 08, 24, 12, 28, 04, 30, 14, 07, 17, 01, 20, 03, 02, 23, 09, 06, 22, 19, 25, 10, 18, 27, 26, 11, 29, 13, 05, 21.
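As a sketch only, chords could be assigned to actions by pairing actions ranked from least to most risky with chords taken in order from the difficulty index above; the function and variable names are assumptions introduced for illustration.

DIFFICULTY_INDEX = [31, 15, 16, 8, 24, 12, 28, 4, 30, 14, 7, 17, 1, 20, 3, 2,
                    23, 9, 6, 22, 19, 25, 10, 18, 27, 26, 11, 29, 13, 5, 21]

def assign_chords(actions_by_risk):
    # Map actions, ordered from least to most risky, onto chords ordered
    # from easiest to hardest to press.
    return dict(zip(actions_by_risk, DIFFICULTY_INDEX))

# assign_chords(['next', 'previous', 'delete all'])
# -> {'next': 31, 'previous': 15, 'delete all': 16}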
A chordic engine that delivers a sufficiently rich interface should provide the ability to switch context, much like the change in input focus from window to window in a conventional GUI. Switching context requires the use of global chords. A global chord is a chord that is available at any point on the chord tree. An example in the MS- Windows™ environment is the CTRL-ALT-DELETE keypress sequence, which will invoke the task manager regardless of the current window context. A sub-global chord is a chord which is assigned lower down a chord tree and which is globally available at all levels below the assigned level. However, a sub-global chord is not available in adjacent chord contexts. One example of an action suited to a global chord is cancellation of an impending action. The chord \||||, in a five-key chordic system, has all bits set and possesses the unique property of being able to be generated regardless of the current chord being pressed or held. This chord is thus ideally suited for use as a global chord of high significance.
Chord partitioning is the isolation of one particular chord context from another. In other words, a chord path selected within a current chord context usually refers to the controls or sub-contexts contained within that chord context. In some cases, it may be appropriate to make the chord partition porous to certain chords. For example, certain chords could be assigned as global chords - such a chord is defined irrespective of the current chord context. Consequently, a global chord may not be useable at all points within the chord tree as such could result in a chord conflict due to the inability to determine whether a local or global action should be executed. It may be the case, however, that a specific rule is implemented in such circumstances, for example, local chord selection overrides global chord selection. Another case where chord partitioning might be made porous is that of a particular chord not being contained in a specific chord sub-context but in an adjacent sub-context. In such case, the chordic engine can permit activation of the particular chord in the adjacent chord sub-context. Thus, a chordic engine can allow a user to select a control in a set of controls other than the set that currently has chord focus without obliging the user to "back out" of the current chord sub- context and then "drill down" into the adjacent chord sub-context. Such selection is always possible in a mouse or pointing device environment because a user can switch context merely by clicking in another sub-context.
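The porous-partition rule discussed above, under which a local chord assignment overrides a global chord of the same value, might be expressed as in the following sketch. This assumes that specific rule and illustrative data structures; it is not a prescribed implementation.

def resolve_chord(chord_value, local_context, global_chords):
    # Return the action for a chord, letting a local assignment override
    # a global chord of the same value.
    if chord_value in local_context:
        return local_context[chord_value]    # local chord selection wins
    if chord_value in global_chords:
        return global_chords[chord_value]    # fall back to the global lamella
    return None                              # unclaimed chord (possible error condition)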
A chord collision is the assignment of the same chord press to two different actions at a particular level of chord context. Generally, a chord context would be designed to avoid identical chords. However, collisions may occur in a complex chordic system that provides for global or sub-global chords, or has dynamic generation and assignment of chord values and chord paths. A chord lamella comprises the total set of possible chords currently available to the user and is not limited by the content of the current chord context. Whilst each dynamic chord context is effectively partitioned from other chord contexts to prevent or ameliorate chord collisions, global chords are valid across context boundaries and thus form a part of the global lamella and each current lamella.
A chord path is the set of chords required to be selected to enable a user to activate a particular control action. In other words, from any current state a user will have to select one or more specific chords to effect any action. In some chordic systems, prior knowledge of a control chord path may not be necessary. In others, however, prior knowledge of a control chord path may be an integral part of the ability to partition from adjacent chord contexts.
Chord Event Decoding

Two types of actions primarily result from a chord event or a sequence of chord events. Either an action is executed (eg. the system beeps or a backlight is turned on) or a context change is made. In the latter case, the first chord of a couplet typically effects a context change such that the second chord of the couplet is made available. It is the second chord that actually delivers the effect of the couplet. It is also possible to utilise timing such that if a couplet is not completed within a specific interval, a reversal of the context change occurs and the couplet is effectively cancelled.
Audio prompting for and/or acknowledgment of a chord press can optionally be provided. For example, a user may hold down a chord to hear what the action will be if the chord is released. The user may then depress additional digits to generate a 'cancel' chord for the current environment. Similarly, audible or verbal confirmation can be provided after an action has been executed, as described hereinafter in this document.
Chord event chaining refers to allowing or disallowing a particular chord event to trigger another permissible event of a different type overloaded onto the same chord value. In other words, a button or control may be activated by more than one type of chord event of the same chord value. Typically, chord events and chord values are assigned to controls and components within a chordic system in a straightforward manner, whereby one chord value is assigned to one object. However, it is possible to "over-load" objects and assign combinations of event types with the same chord value to the same object, or different objects. For example, in the case of a spinner, the up control can be assigned to both an instant break event and a make-after-delay event of the same chord value. Thus, the spinner up button can be activated and held to increment the number continuously or, alternatively, by pressing and releasing the button before the make-after-delay event can fire, the spinner can be incremented once by a small value. It is undesirable to fire the make-after-delay event after the instant break event as this will result in the spinner incrementing by one more unit, even after the user has released the key. Thus, it is necessary to prevent the make-after-delay event from being fired by "swallowing" or "deleting" the make-after-delay event. The foregoing example demonstrates the necessity of having control over chord event chaining.
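A minimal sketch of the spinner example above follows, in which one chord value is overloaded with an instant break event and a make-after-delay event, and the pending delayed event is swallowed on a quick press-and-release. The class, method names and timing value are assumptions for illustration only; a full implementation would repeat the delayed increment while the chord remains held.

import threading

class SpinnerUpControl:
    # One chord value overloaded with two event types (illustrative sketch only).
    def __init__(self, delay=0.4):
        self.value = 0
        self.delay = delay
        self.pending = None                   # pending make-after-delay event, if any

    def on_press(self):
        # schedule the make-after-delay event a short time after the press
        self.pending = threading.Timer(self.delay, self.on_make_after_delay)
        self.pending.start()

    def on_make_after_delay(self):
        self.pending = None
        self.value += 10                      # larger step while the chord is held

    def on_instant_break(self):
        # quick press-and-release: swallow the pending make-after-delay so that it
        # cannot increment the value again after the key has been released
        if self.pending is not None:
            self.pending.cancel()
            self.pending = None
            self.value += 1                   # single small step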
Chord linking refers to the association of specific chord sequences with specific actions or objects. Linking can occur either in a static manner at compile time or in a dynamic manner at run time. The registrant model enables a chord-enabled software object to register specific chord sequences and/or values with the chord-based system. Then, when a specific chord sequence is generated, the value is matched against the registered values and the appropriate action or object is notified. In a lookup model, on the other hand, the chordic engine maintains a set of lookup tables that are indexed according to chord value. When a particular chord is generated by a user, the chordic engine traverses the lookup table to determine the contents of the cell indexed by the chord value. Lookup tables are typically implemented by way of arrays.
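A minimal sketch of the lookup model follows: a table indexed by chord value for singlets, with a second dimension for the two halves of a couplet. The table sizes follow the five-key example above; the entries and function name are assumptions for illustration.

# Flat interface: 31 singlet chords (index 0 is unused, as chord value 0 means no keys pressed).
singlet_table = [None] * 32
singlet_table[12] = 'save'

# Couplets: the first and second chord values index a two-dimensional table.
couplet_table = [[None] * 32 for _ in range(32)]
couplet_table[1][24] = 'open window B'

def lookup(first_chord, second_chord=None):
    # Return the action linked to a singlet or couplet, if any.
    if second_chord is None:
        return singlet_table[first_chord]
    return couplet_table[first_chord][second_chord]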
Under certain circumstances, a user may wish to remove a key/digit from the chordic system, say, due to injury. In such case, the chordic engine algorithmically remaps all the chord press actions that utilise the particular key/digit.
As chord data is received by the chordic engine from a chordic input device and is acted upon, it is necessary to maintain knowledge of the previous chords received. For example, in the case of a couplet in a broadcast message model, the chordic engine sends out a message containing the first chord. As that chord will not be claimed by any object, the chord is pushed onto a chord stack. When the next chord is received by the chordic engine, the previous chord is popped from the chord stack and concatenated with the current chord, and another message containing both chords is broadcast for receipt by the appropriate object. The chord stack may contain various information such as chord value, event type, commencement, duration, etc. A chord stack is a dynamic Last-In-First-Out (LIFO) queue for storing the currently pending chords that will form the final chord path of a user's choice. As a user commences pressing chords from an initial state, each chord press is pushed onto the chord stack. Once the required control is activated, the chord stack is usually completely cleared. However, it may be that the control is part of a chord context sub-set that still retains chord focus. In such case, the last chord pressed is popped from the chord stack, thus providing retained knowledge of the prior chord events.
Fig. 15 shows a flow chart of a timed break couplet implementation that utilises a chord stack. Step 1505 represents waiting for a chord press, and once such is received, step 1510 determines whether the chord pressed is a singlet. If so, step 1515 sends the singlet pressed and there is a return to step 1505. If the result of step 1510 is no, then step 1520 tests whether the second part of the present couplet is being awaited. If no, step 1525 determines whether it is the first part, and if yes, step 1535 ignores the chord and returns to step 1505 to await a further chord. If, however, it is not the first part of the chord, then step 1530 pushes the chord onto the chord stack and returns to step 1505. In step 1520, if yes, step 1540 determines whether the chord is the second part. If not, step 1545 pops the chord from the chord stack and loops to step 1505, but if so, step 1550 determines whether the intra-couplet gap is less than a specified gap. If not, step 1545 pops the chord from the chord stack and loops to step 1505, but if so, step 1555 pops the chord from the chord stack and sends the couplet pressed.
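The flow of Fig. 15 might be rendered roughly as follows. The predicates, the gap limit and the send_* callbacks are assumptions supplied by the surrounding chordic engine, and the sketch pushes a chord when it is recognised as the first part of a couplet; it is illustrative only and not a definitive implementation.

def handle_chord(chord, stack, now, max_gap, is_singlet, is_first, is_second,
                 send_singlet, send_couplet):
    # One pass of the timed break couplet logic of Fig. 15.
    if is_singlet(chord):
        send_singlet(chord)                         # step 1515
        return
    if not stack:                                   # not yet awaiting the second part
        if is_first(chord):
            stack.append((chord, now))              # push the first part onto the chord stack
        # otherwise ignore the chord and await a further chord press
        return
    first_chord, first_time = stack.pop()           # awaiting the second part: pop the stack
    if is_second(first_chord, chord) and (now - first_time) <= max_gap:
        send_couplet(first_chord, chord)            # step 1555: intra-couplet gap test passed
    # wrong second part or gap too long: the couplet is cancelled (step 1545)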
The chord history is a log of the chord data stream over a period of time. This history has several potential uses including a biometric record of user skills for training or teaching purposes and an implementation of macro functions whereby a user can record sets of chord events for replay. An advanced, adaptive chordic engine can use the chord history to alter the characteristics of the chordic system in accordance with a user's level of proficiency of operation.
A process of matching incoming data against a defined pattern is necessary to determine which object/s should receive a current chord event message. In the broadcast event model, the chordic engine broadcasts a message to all objects that can receive. When a targeted object recognises that the message is for that object, the object executes an action or command (eg. "save") and notifies the sender (typically the chordic engine) whether the reaction to the message is an action or a link to a different context. The chordic engine will respond accordingly - no specific action is generally necessary in respect of an action but a link will typically result in a context switch and an interaction with the chord stack. A lack of response to a broadcast message indicates that no object has claimed the chord event and/or that the target object does not exist in the currently available context - and can be treated as an error condition. The broadcast message model enables objects to be created and deleted without a need for maintaining a data table of chord objects. However, it is important that newly created objects do not introduce a chord sequence within their current context that conflicts with an existing chord assignment or a conflict may result. Such can be avoided by "hard coding", by the programmer, of chord assignments throughout the software application. In a dynamic environment, some form of error checking is required to avoid chord conflicts. Resolution of potential chord conflicts will require negotiation between the chordic engine and the new object to re-map the new object to an available chord sequence.
A different approach is to allow objects to register as "listeners" with the chordic engine. The listener event model has the advantage of the chordic engine assigning chord sequences to registering objects, thus eliminating the need for error checking by walking the object tree. When a chord event is raised, the chordic engine determines a matching object from the registered objects, if any, and sends a message to that object.
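A sketch of the listener event model follows, in which objects register interest in chord sequences and the engine notifies the matching listener when an event is raised. The class and method names are assumptions introduced for illustration.

class ChordicEngine:
    # Minimal listener registry (illustrative only).
    def __init__(self):
        self.listeners = {}                      # chord sequence (tuple) -> registered object

    def register(self, chord_sequence, listener):
        key = tuple(chord_sequence)
        if key in self.listeners:
            raise ValueError('chord collision: %r already registered' % (key,))
        self.listeners[key] = listener

    def raise_event(self, chord_sequence):
        listener = self.listeners.get(tuple(chord_sequence))
        if listener is not None:
            listener.on_chord(chord_sequence)    # notify the matching object
            return True
        return False                             # unclaimed event, possible error condition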
The movement of a user's fingers, as chords are formed and released on a chordic input device, provides the chord microstructure. As it is impossible for a user to discretely depress or release fingers exactly simultaneously, minute variations can be measured and recorded as chords are formed and released. Such data can be useful for purposes of security, user authentication and training. For example, the chord microstructure can be used to increase the potential key space of an encryption code. Higher levels of security can be obtained by analysing the spacing and elements of the chord microstructure (ie. individual finger movements that comprise a complete chord). A microstructure notation, similar to the chord pattern notation described hereinbefore, can be employed.
The use of timing and values of certain chord sequences can also provide security mechanisms. For example, a user can generate or be issued with a sequence of chord presses that possess a defined rhythm and/or timing. Analysis of such sequences in response to a challenge can yield deterministic or probabilistic authentication of a user. Advantageously, "passchords" that have similarly sized key spaces to conventional passwords can be significantly more difficult to imitate or convey to another party.
Fig. 16 shows a binary timing diagram of an example of a chord microstructure for the chord sequence or couplet \|||. \||.. . Horizontal displays 1611 to 1615 relate to key activations by the thumb and 4 fingers, respectively, of a human hand. As can be seen from Fig. 16, the thumb key is activated first, followed by the third, second and first fingers, respectively. The delays in discretely activating the various keys are classed as "minor make gaps" and release of the various keys is classed as "minor breaks". A minor break 1620 also occurs during activation of the second finger keypress 1613 but is remade during the intra-chord width 1630. After the inter-chord gap 1640, the second chord \||.. is activated by the thumb and first and second fingers.
Figs. 17a to 17c show various ways of indicating microstructure keypress and/or key release order for the chord \|||. . Fig. 17a shows keypress order by way of colour or shading. Fig. 17b shows keypress order by way of height and Fig. 17c shows keypress and key release order by way of height. The convention employed in Figs. 17b and 17c is that the keys are to be pressed and released in the order of increasing height of the indicator bars. Pseudo code for an example of an algorithm for microstructure analysis is contained in Table 3, hereinafter, which provides continuous coarse-grained authentication by monitoring specific chords and comparing the microstructure of those chords to a statistical history. As would be understood by a skilled person, the algorithm could simply be modified for user difficulty analysis and detection.
Table 3

valid_user = true
for each security chord
    for each micro_structure feature
        if micro_structure feature duration ≠ required duration
            valid_user = false
            break
        endif
    endfor
endfor
Non-Visual User Interfaces
Computer user interfaces currently rely primarily on visual and haptic (touch) input and output. Although esoteric interfaces such as brain control are being developed, humans will continue to use touch, sound and vision for interacting with computers for some time to come. Screens, displays and the like require a user to look at them for interaction, which may be inconvenient or impractical in certain situations. Alternate communication channels, when used in conjunction with chording, offer the potential to further reduce the cognitive load on a user (thereby enabling the user's eyes to be maintained on a task being performed), reduce errors and generally improve performance. Such communication channels that are readily available include sound, vibration, movement and other biological stimuli such as heat, smell and electrical impulses. These channels have greatly differing characteristics and bandwidths. For example, smell and heat can only convey a very simple message, which may be cardinal in nature. An embodiment using electrical impulses comprises a wrist band with contact pads that deliver mild electric currents to stimulate a wearer's muscles. When adequately spaced, such stimuli can be decoded or resolved by the wearer into distinct patterns of information.
Vibrations or movement can be used to convey more complex information and more reliably than smell or temperature. For example, a vibrating chord pad can provide the user with feedback or confirmations by altering the frequency and amplitude of the vibrations to convey different messages. Whilst vibration is a reasonably robust communication channel that is unaffected by wind or ambient sounds, it has a narrow information bandwidth as most people can only discern and remember a limited number of different vibration patterns.
Sound, on the other hand, is the most effective communication channel in terms of data bandwidth and speed and can convey a rich and complex set of messages without requiring a physical connection to a user. Sound is typically used in conjunction with a visual display to convey or reinforce specific messages such as an error dialog or other system event. The use of sound allows a user's attention to be diverted from the display until such time as the computer alerts the user to an event (e.g., a beep when a long running process has finished). However, sound has a more important role in a non- visual interface. A common example of a non-visual interface is a spoken telephone menu (e.g., "Press 1 for support"). A user presses appropriate buttons on the telephone in response to menu options or choices that are announced. These are prescriptive indications, in that prescribed options are announced to a user before choices are made by the user. Sounds can also be used in a descriptive manner to alert or confirm to a user what choice has been made. A typical use of a descriptive sound in a visual interface is a "beep" or other error sound played when a user makes an incorrect or invalid choice. In a non-visual interface, sounds are used to indicate both choices and confirmations or corrections.
Non-visual chordic interfaces require that users recognise chords associated with certain actions. For example, a chord pad connected to a computer without a display and that uses sound as a feedback channel requires users to know and/or remember which chord or chords are required to access the top level of the sound menus. The chord-action associations either have to be learnt by a user, statically displayed (e.g., on a sticker or decal), informed in a continuous loop, or informed at the beginning of the user's interaction. In the case of a spoken telephone menu, user interaction commences when the menu service answers a user's call and it is at this point that top level menu options can be announced. In another example, sensors in a chord pad can detect when a user has grasped the chord pad and the top level menu options can be indicated. Menu options can be indicated using speech or non-speech sounds. An example of a speech indication is the phrase, "Press your thumb for help", which informs a user that pressing and releasing the thumb key will result in a help menu. An example of a non-speech indication is assignment of a sound effect to each finger. In the foregoing "Press your thumb for help" example, the thumb sound would be played along with the spoken phrase "for help". An advantage of using non-speech indications is that sounds can be produced more quickly than speech, thus reducing the latency. A disadvantage of using non-speech indications is that the user is required to know and understand the coding method used as it is not possible to direct the user to particular menus using non-speech cues unless the user has prior knowledge of what the cues mean.
Audio cues for chordic systems may be categorised into two classes, namely prescriptive and descriptive sounds. Prescriptive sounds are those sounds that describe or convey to a user which chord should be pressed to effect a desired action. A non-chordic example of this is the voice telephone menu referred to hereinbefore, wherein a human voice lists available options and corresponding telephone keys for selecting those options. A similar system can be practiced in a chordic environment. Available options together with which finger or fingers to press for each option are announced by a voice. More sophisticated systems can announce which chords to press by associating a unique sound with each chord or by assigning a sound element to each digit of the hand and constructing a composite sound comprising these elements. A simple example of the latter technique is a pentatonic scale, wherein musical chords can be assembled using up to five different tones. The pentatonic scale method, whilst useful, can only effectively convey a subset of the available chords as certain chords are difficult for a listener to distinguish from others. Dissonance can be used to ameliorate this problem. That is, rather than using harmonious sounds to represent chords to be pressed, dissonant sounds can be used. The advantage of using dissonance is that the range of available chords that can be represented is no longer constrained by the limits of the ear to distinguish between differing but seemingly similar harmonious sounds.

Descriptive sounds are used during or after chord presses to provide feedback to a user. For example, telephone keypads produce a different tone for each button pressed. As a user becomes familiar with certain telephone numbers the tones act as an error correcting cue, as a particular sequence of tones becomes associated with a particular number. In a similar manner, a user of a chordic input device can receive sounds that provide feedback about the correctness of the chord being pressed. In a non-visual environment, this allows a user to control chordic software and obtain feedback about current chord presses. As a user becomes familiar with the software being used, certain sounds will become associated with actions, thus providing a separate error checking method for a non-novice user.
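The pentatonic technique mentioned above might be sketched as follows, assigning one tone per digit and sounding the tones of the digits that form a chord. The frequencies chosen, the digit-to-bit mapping names and the function name are illustrative assumptions only.

# One tone per digit (thumb to fourth finger), drawn from a C major pentatonic scale.
PENTATONIC_HZ = {
    'thumb': 261.6,   # C4
    'first': 293.7,   # D4
    'second': 329.6,  # E4
    'third': 392.0,   # G4
    'fourth': 440.0,  # A4
}
DIGIT_BITS = {'thumb': 16, 'first': 8, 'second': 4, 'third': 2, 'fourth': 1}

def chord_tones(chord_value):
    # Return the frequencies announcing which digits to press for this chord.
    return [PENTATONIC_HZ[d] for d, bit in DIGIT_BITS.items() if chord_value & bit]

# chord_tones(24) -> tones for the thumb and first finger (chord \|... in glyph notation)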
The foregoing discussion relating to prescriptive and descriptive sounds is directed to actual chord presses. However, similar techniques can be applied to content, whereby menus and available operations are conveyed to a user/listener by spoken language or a defined mapping of sounds to actions. For example, the command "Save" might be associated with a particular non-language sound. This technique can be used for both prescriptive and descriptive situations. The foregoing methods can also be practiced using a system of low frequency vibrations or other tactile sensations.
Speech, whilst being immediately understandable, has some limitations such as latency, that is, the time taken to actually say things. Additionally, recognition of spoken words can be difficult under certain circumstances, for example, in the face of large amounts of background or ambient noise. Other non-visual interfaces, whilst not having the immediacy of understanding that speech has, can deliver various other desirable attributes such as robustness and speed. In all cases, however, the non-visual interface must be learnt so that the information contained in the interface can be understood and acted upon as required. As the non-visual interface usually contains little in the way of conventional cues that would be generally known by a member of the public, all parts must be learnt by the user. Typically this would be by means of a visual interface that would indicate to the user the associations between the various cues such as sounds, and their meanings. Two fundamental mechanisms for doing this are an ad-hoc memory exercise, wherein the user is obliged to learn each chord and command or effect, and the elucidation of a vocabulary and grammar that allows the user to understand any "sentences" that he may encounter. In other words, and by way of example, a particular sound could be associated with each finger such that the user would know which fingers to press as a result of hearing one or more of the five sounds played sequentially. Similarly, the elements of the menu structures and navigation such as "home, next, window," and so forth could be defined in such a manner that the user can understand groupings of these sounds.
Users must learn or memorise a non-speech, non-visual interface. For many situations a few commands may suffice. These would be readily learnt by a user and a simple means of instruction would be adequate. For more complex interactions (e.g., because of a larger menu structure with more options or a reduction in the input device bandwidth from five bits to three bits) that necessitate multiple multiplet chords, a more sophisticated teaching method is required.
A memorised interface may or may not provide feedback about the results of actions selected by a user. However, no direction is provided about available options and associated chords, whether by sound or any other means. In contrast, a cued non-visual interface provides a user with cues and information about the choices currently available. Whilst the user will have learnt elements of the vocabulary and grammar of the sounds being used, the user can be presented with new combinations of the vocabulary pertinent to the current state of the system. However, the user is not obliged to learn all the permutations available. Thus, a cued interface can provide new aspects to the interface in an ad-hoc manner, whereas a memorised interface cannot.
Certain user commands that have serious consequences (e.g., "delete all files") can be made difficult to execute inadvertently. For example, in a three-button chordic input device that is controlled by the thumb and adjacent two fingers, the thumb may be required to be held down whilst the other two fingers perform certain actions. Using the text glyph notation described hereinbefore, the command "delete all files" can be represented as \..(.|. ..|). The brackets indicate that the contained chords (index and middle fingers, respectively) must be executed whilst the thumb is held down. On release of the thumb, all files will be deleted. This provides a simple and compact way to represent complex finger actions whether by means of static displays or active displays. The notation can be extended to encompass the entire range of micro event possibilities. In practice, however, only a subset of the notation is used as the representations become clumsy if taken to completeness for any given type of chord event. This has relevance for non-visual interfaces on account of the possibility of displaying a practically unlimited sub-set of the available chord event space to a user. Thus, the fundamental relationship between any non-speech, non-visual interface and a preceding visual interface can be serviced well beyond the capacity of a user.
An underlying concept of the chordic engine is that an action is triggered as a result of what the user understands to be the initiation of that action. The user sees, feels or hears a chord and its resulting outcome or command and initiates that outcome or command by way of generating a chord event. The chord event can range from the simplest break chord to a complex "finger dance". The chordic engine must be capable of both interpreting and transmitting that event, which requires a description of what the user is to do, is doing, or did in real time and a comparison against a database of chord events and associated resulting commands or outcomes. Given the inability of humans to consistently operate their fingers with microsecond precision, any interpretation of human actions has to be filtered and normalised. This is typically done by means of notch filters, wherein the timings are matched to a cardinal scale. These normalised events can then be compared to the database of defined chord events. The primary emphasis of the chord fingering notation described hereinbefore is for human readability. However, a higher level "chord event description language" could be designed that would enable machine parsing and correct disambiguation.
The chordic engine provides services to a user interface. Such services can be delivered to either a user interface engine or an actual user. One such service that the chordic engine provides to a user is the ability for the user to alter the current chord state (continually and revocably). In other words, the user may choose from a known set of chord mappings such that a chordic input device can be configured to respond from initialisation in a particular and desirable way. For example, in one embodiment, pressing a particular chord sequence such as \||||++ \ (which is interpreted as hold the whole hand down for two beats, release, then press the thumb) results in the chordic engine embedded in a chordic input device switching modes. The concept of chord "beats" is described hereinafter in this document. The individual modes available may be predefined internally in the chordic engine or may be defined or customised by a user and subsequently uploaded to the chordic engine. The most primitive mode is the basic native keyboard, wherein each switch under the user's fingers acts as if it were one of the SPACE, J, K, L, and ; keys on a normal QWERTY keyboard. Additional modes may include specific keyboard key presses or sequences thereof. A user could thus construct a group of commands tailored to an application that is used frequently. The chordic engine can be instructed as to the permanency of a particular mode by way of storing mode values in EEPROM or other non-volatile memory, thus enabling the chordic engine to remain in that mode until instructed otherwise.

Implementation of such modes requires storage, in a mode effect table, of information associated with the individual elements of each mode, that is, the chord events that result in a desired action. Storage of this information for a small flat chordic interface (i.e., no multiplet chords) is trivial. Each action that results from a chord release can be stored as information in an array of 2⁵ - 1 = 31 cells. The action corresponding to a chord may be readily obtained by using the chord value as an index into the array. This technique may also be applied for couplets. As the array now comprises approximately nine hundred values, the first and second halves of the couplet are used as separate indexes into the array (e.g., a 2-dimensional array). Even if most of the cells are occupied (i.e., most chord combinations are defined), the computing memory requirements remain modest. However, as chord depth is further increased, the memory requirements quickly become excessive and it is logically impossible to build a data structure that can accommodate an infinite depth of chords. Thus, a data system that is rich enough to accommodate all possible chord activation events and also scales robustly is necessary. In general terms, it is not possible to map each cell to a particular chord event based on the value of an event; rather, each cell can contain any chord event. The software routine that accesses the effect of the event does so by "walking" the mode effect table, that is, iterating through the table. Upon discovery of an entry for a chord event that matches the current chord event (including any prior events such as the first part of a couplet), an appropriate or corresponding action, if any, can be taken. If the current mode effect table does not contain an entry for a current chord event, such can be considered in most cases to constitute an error.
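The mode effect table walk described above might, for a flat interface and for couplets, be sketched as follows. The entry structure, names and example actions are assumptions for illustration; as the text notes, deeper chord paths would need a richer data system.

def walk_mode_effect_table(table, chord_event):
    # Iterate through the mode effect table looking for an entry whose stored
    # chord event (including any prior events, e.g. the first half of a couplet)
    # matches the current one.
    for entry in table:
        if entry['event'] == chord_event:
            return entry['action']
    return None                                   # no entry: usually an error condition

mode_effect_table = [
    {'event': ('singlet', 12), 'action': 'save'},
    {'event': ('couplet', 1, 24), 'action': 'open numeric keypad'},
]
# walk_mode_effect_table(mode_effect_table, ('couplet', 1, 24)) -> 'open numeric keypad'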
A "chord beat" is a measure of user interaction time in a chordic system. By algorithmic means, the delays users exhibit whilst pressing chords are measured. In some systems, the beat is an ordinal, non-linear measure. An example of a beat time mapping follows: one beat is less than one second in duration, two beats is between four and five seconds duration, and three beats is anything more than ten seconds. This provides a means of readily mapping a learnt chord duration to a simple visual notation (in this case, a "+" indicates one beat). The use of beats enables overloading a chord shape or pattern of the fingers with more than one action, whereby the additional actions are accessed by the duration of time the chord is held before release. The foregoing beat notation enables a user to readily learn particular chord presses from a visual display, which knowledge is then readily used in a non-visual environment. Audio prompts may be used, if necessary, to assist the user in the non-visual environment. For example, changes in the pitch of a sound that is associated with a particular chord may be used to indicate how many beats (if any) that chord must be held for to activate an associated action. The sequencing or direction in which the chord is made (i.e., the ordering of the individual finger presses) can be annotated visually and/or indicated aurally in a similar manner.
Chordic Engine Hardware
Fig. 18 is a schematic block diagram of a device for implementing a 5-bit embodiment of the chordic engine. The device 1800 includes a processor or processing means 1810 and a communications interface 1820. The processor or processing means 1810 is preferably implemented using any microprocessor or microcontroller, but can also be implemented by means of a state machine or other discrete circuit elements that are known in the art of electronic circuit design. Commonly available microprocessors and microcontrollers include features such as on-board memory (eg. random access memory (RAM), read-only memory (ROM) and rewritable memory (EEPROM, flash memory, etc.)), timers, counters and communication interfaces, all of which can be used to implement embodiments of the present invention. The processor or processing means 1810 is connected to, and communicates with, the communications interface 1820 by means of a bus or other internal communication link 1830.
The processor 1810 includes 6 input lines (pins 1 to 6), including a common line (COM) and input lines I1 to I5 for receiving chord data from a chordic input device (not shown). Input lines I1 to I5 map onto the thumb and 4 fingers of a human hand.
The communications interface 1820 receives data from the processor or processing means 1810 via the bus or other internal communication link 1830 and outputs data in a specific format (eg. serial data) via output data line Dout (pin 11). The input data line Din is typically unused.
In certain embodiments, the device 1800 includes one or more input and/or output interfaces (not shown in Fig. 18), each of which may be analogue or digital. The input interface/s may be used to deliver additional information to the chordic engine. Such information includes the unintentional activation of keys on a chordic input device. One or more sensors or switches located on the chordic input device detect when a user is in contact with the chordic input device and signal this information to the chordic engine via an input interface. Thus, if a user is not holding the chordic input device and keys are activated by the device bumping against things, the chordic engine is able to identify unintentional key presses.
Input devices are coupled to the chordic engine via one or more input interfaces. For example, sensors and/or switches that detect which side of the chordic input device is being held by a user provide such information to the chordic engine via one or more input interfaces. This enables the chordic engine to detect which hand the user is holding the chordic input device with. Sensors and/or switches can also be used to detect vibration, movement and orientation. Data from these sensors and/or switches is also delivered to the chordic engine via the one or more input interfaces and can be used to alter certain characteristics such as increasing the hold times for chords when the user is in a high vibration environment.
Output devices are coupled to the chordic engine via one or more output interfaces. For example, an audio speaker can be driven by the processor or processing means 1810, via an audio output interface, which may be integrated in the processor or processing means 1810. A vibration generator, which can be practiced by a weight eccentrically mounted on the shaft of a small electric motor, can be driven by the chordic engine via an output interface. By providing power to the electric motor, the shaft spins and the eccentricity of the weight causes vibrations whose amplitude and frequency are related to the rotational speed of the motor. Thus smaller or larger vibrations and bursts of vibration can be delivered to a user who is holding the chordic input device by applying varying bursts of current to the electric motor under software control.
In one specific embodiment for executing the computer program code for a 5-bit chordic interface contained in Appendix B, the processor 1810 comprises a PIC16F84 microprocessor device and the communications interface comprises a MAX3222 UART.
In another embodiment, the device 1800 comprises an Application Specific Integrated Circuit (ASIC), which can be integrated into apparatus such as a chordic input device. The processor or processing means 1810 and communications interface 1820 can comprise any components available to an ASIC designer that are functionally and economically suitable. The ASIC need not be restricted to two components; a single component or multiple components may suffice, as the case may be. As will be appreciated by one skilled in the art of electronic circuit design, numerous possibilities exist for embodying the present invention in electronic hardware. The embodiments described hereinbefore with reference to Fig. 18 are thus not intended to limit the scope of the claimed invention.
Chordic Engine Methods

Fig. 19 is a flow diagram of a method for identifying predefined chordic commands in keypress data generated by a user. Keypress data is received at step 1910. At step 1920, at least one predefined chord in the keypress data is identified. A parametric value relating to the predefined chord is obtained at step 1930 and a predefined chordic command is identified based on the at least one identified chord and the parametric value at step 1940.
Fig. 20 is a flow diagram of a method for authenticating users of a chordic input system. Keypress data is received from a user at step 2010. Time durations between discrete key presses of the user that form part of a chord are determined at step 2020. The time durations determined in step 2020 are compared to stored time durations at step 2030. The user is authenticated at step 2040 if the difference between the determined time durations and the stored time durations is less than a predefined threshold.
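The method of Fig. 20 could be sketched as follows, comparing the measured inter-keypress durations of a chord against stored durations and authenticating when the total difference stays below a threshold. The function name and the distance measure used are assumptions for illustration only.

def authenticate(measured_durations, stored_durations, threshold):
    # Authenticate a user from the time durations between the discrete key
    # presses that form a chord (Fig. 20, steps 2020 to 2040).
    if len(measured_durations) != len(stored_durations):
        return False
    difference = sum(abs(m - s) for m, s in zip(measured_durations, stored_durations))
    return difference < threshold

# authenticate([0.08, 0.05, 0.11], [0.07, 0.06, 0.10], threshold=0.05) -> True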
Fig. 21 is a flow diagram of a method for processing keypress data. Keypress data is received from an input device at step 2110. The input device has a plurality of keys, each assigned only to a specific single digit of a human hand, the keys being physically arranged to match the relative sequential relation of the digits. At step 2120, a plurality of representations are output in a manner to non-visually indicate which combination of one or more keys of the input device are required to be pressed to effect particular instructions or data input. At least one of the plurality of representations indicates in a non-visual manner that a simultaneous combination of digits effects the represented instruction or data input. Instructions or data embedded in the keypress data are identified or decoded at step 2130.
Embodiments of the present invention have immediate application in relation to conventionally-understood computing devices, but equally could be incorporated into viewfinder control systems (eg. camcorders), autotellers, fax and photocopy machines, embedded interfaces (eg. vending machine maintenance interfaces), mobile telephones and pagers, portable machinery (eg. service equipment), video and television remote controllers, interactive television controllers, personal stereos, household appliances (eg. clock radios), vehicleonics (eg. air conditioning, cruise control, stereo, cabin controls, etc.), powertool interfaces (eg. pistol grip-style drills), control-mounted interfaces (eg. steering wheel or joystick), game and toy interfaces, electronic slate-style magazines, pocket organisers, hand-held computing devices, notebook and pen computers.
Various alterations and modifications can be made to the methods and arrangements described herein without departing from the scope and spirit of the invention, as would be apparent to one skilled in the relevant art.

APPENDIX A
-- allow external objects to change KordOS
to set KSQwertyOn to offOn
  system QwertyOn            -- internal settings; this setting allows normal KBD
  set QwertyOn to offOn
end

to set KSreminderOn to offOn
  system remainderOn         -- allows non-chordal KBD presses
  set remainderOn to offOn
end

to set KSChordingOn to offOn
  system ChordingOn          -- Chording master on/off switch
  set ChordingOn to offOn
end

to set KSQwertyOnly to offOn
  system QwertyOnly          -- locks out chords
  set QwertyOnly to offOn
end

to set KSPressReleaseType to Pval
  system pressReleaseType    -- chord formed instantly or on release
  set PressReleaseType to Pval
end

to handle Keydown vkey       -- handle the ToolBook (TBK) message Keydown, ie. a KBD key
  system remainderOn         -- declare system vars for this handler
  system QwertyOn
  system QwertyOnly
  system ChordingOn
  system pressReleaseType
  get ChordingOn             -- check if on, skip if not
  if it = "false" then
    forward
    break
  end
  system chordList[31]
  -- initialise left & right hands
  RT = keySpace
  R1 = keyJ
  R2 = keyK
  R3 = keyL
  R4 = keySemicolon
  LT = keySpace
  L1 = keyF
  L2 = keyD
  L3 = keyS
  L4 = keyA
  vvkey = lowercase(vkey)    -- filter upper case of message parameter
  if vvkey = RT or vvkey = R1 or vvkey = R2 or vvkey = R3 or vvkey = R4 \
     or vvkey = LT or vvkey = L1 or vvkey = L2 or vvkey = L3 or vvkey = L4 then
    -- check if it's a Kording key on KBD
    set chord to 0                 -- reset local variables
    set thumbPressed to false
    set firstPressed to false
    set secondPressed to false
    set thirdPressed to false
    set fourthPressed to false
    if pressReleaseType = "WaitForRelease" then   -- check how the chord is made
      do
        if keyState(RT) is "down" or keyState(LT) is "down"   -- keyState() is a TBK func'n
          set thumbPressed to true
        end
        if keyState(R1) is "down" or keyState(L1) is "down"
          set firstPressed to true
        end
        if keyState(R2) is "down" or keyState(L2) is "down"
          set secondPressed to true
        end
        if keyState(R3) is "down" or keyState(L3) is "down"
          set thirdPressed to true
        end
        if keyState(R4) is "down" or keyState(L4) is "down"
          set fourthPressed to true
        end
      until keyState(RT) is "up" and keyState(R1) is "up" \
        and keyState(R2) is "up" and keyState(R3) is "up" \
        and keyState(R4) is "up"
      if qwertyOn = "false"
        get flushMessageQueue()    -- clear the KBD buffer
      end
    end
    if pressReleaseType = "instant" then          -- the other way of chording
      pause 15                                    -- debounce other keys
      if keyState(RT) is "down" or keyState(LT) is "down"
        set thumbPressed to true
      end
      if keyState(R1) is "down" or keyState(L1) is "down"
        set firstPressed to true
      end
      if keyState(R2) is "down" or keyState(L2) is "down"
        set secondPressed to true
      end
      if keyState(R3) is "down" or keyState(L3) is "down"
        set thirdPressed to true
      end
      if keyState(R4) is "down" or keyState(L4) is "down"
        set fourthPressed to true
      end
      if qwertyOn = "false"
        get flushMessageQueue()
      end
    end
    if fourthPressed is true       -- build raw chord value
      set chord to 1
    end
    if thirdPressed is true
      chord = chord + 2
    end
    if secondPressed is true
      chord = chord + 4
    end
    if firstPressed is true
      chord = chord + 8
    end
    if thumbPressed is true
      chord = chord + 16
    end
    if chord = 0 then              -- internal error trap
      get flushMessageQueue()
      break
    end
    if qwertyOn = "true" then      -- forward singleton chords
      if chord = 1 or chord = 2 or chord = 4 or chord = 8 or chord = 16 then
        forward
        break
      end
    end
    get flushMessageQueue()                 -- clear KBD buffer
    chordval = chordList[chord]             -- look up the content of raw chord
    send kordPressed chordval to this Page  -- send message to graphic Page (a TBK structure);
                                            -- the page forwards the message to all objects on it
  else                                      -- end of test for non-finger chars
    -- below we handle non-chord keys
    if remainderOn = "true" then            -- they want all non-chord values
      forward
    else
      get flushMessageQueue()               -- another obsessive clear
    end
  end
end

to handle enterBook                         -- initialise this system book on startup
  send resetKords                           -- sends message resetKords
end

to handle resetKords                        -- set the raw chord contents
—put otherChords = " "here Handchords = "1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31" 5 dotChords =
Figure imgf000045_0001
system chordList[31 ] fill chordlist with handChords in [item] order system remainderOn --declare system vars for this handler system QwertyOn system QwertyOnly system ChordingOn system PressReleaseType set qwertyOn to "false" --reset to known state set remainderOn to "true" set QwertyOnly to "false" set ChordingOn to "true" set PressReleaseType to "WaitForRelease" end --this is a typical button handler, it receives the message from the page -this script is contained to handle Kordpressed kordval if kordval = myKordVal of self —check the parameter of the message
—my predefined attribute, ie. is it me?
— **** perform some action end end to handle buttonClick -allows mouse clicks send Kordpressed byKordVal to self —trick me that a key is pressed end
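The Keydown handler above reduces whichever chording keys are held down to a single raw chord value (fourth finger = 1, third = 2, second = 4, first = 8, thumb = 16) and then looks that value up in chordList before sending a kordPressed message to the page. The following Python sketch is not part of the patent disclosure; it simply restates that accumulation and lookup in a more widely readable form, with chord_table standing in as a hypothetical placeholder for the handChords/dotChords contents of the resetKords handler.

# Illustrative sketch only (not part of the patent): restates the raw chord
# value accumulation and lookup performed by the Keydown handler above.
# chord_table is a hypothetical stand-in for the handChords/dotChords lists.

FINGER_WEIGHTS = {
    "thumb": 16,    # thumbPressed adds 16
    "first": 8,     # firstPressed adds 8
    "second": 4,    # secondPressed adds 4
    "third": 2,     # thirdPressed adds 2
    "fourth": 1,    # fourthPressed adds 1
}

# Hypothetical lookup table: raw chord values 1..31 mapped to output tokens.
chord_table = {value: "chord-%d" % value for value in range(1, 32)}

def raw_chord_value(pressed_fingers):
    """Build the raw chord value from the set of fingers held down."""
    chord = 0
    for finger in pressed_fingers:
        chord += FINGER_WEIGHTS[finger]
    return chord

def handle_chord(pressed_fingers):
    """Return the table entry for a chord, or None for the empty chord."""
    chord = raw_chord_value(pressed_fingers)
    if chord == 0:
        return None                  # mirrors the handler's internal error trap
    return chord_table[chord]        # look up the content of the raw chord

# Example: thumb + first finger gives the raw chord value 16 + 8 = 24.
print(raw_chord_value({"thumb", "first"}))   # 24
print(handle_chord({"thumb", "first"}))      # chord-24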
APPENDIX B
; KEYPAD.ASM - for use with KordPad circuit
;
        list      p=16f84
        #include  p16f84.inc

        __CONFIG  _CP_ON & _WDT_ON & _PWRTE_ON & _XT_OSC
        __IDLOCS  H'9810'

ENABLE_   equ  0             ; enable R_OUT on MAX3222 (RA0)
SHUTDN_   equ  1             ; shutdown T_OUT on MAX3222 (RA1)
DATAIN    equ  2             ; data input line (RA2)
DATAOUT   equ  3             ; data output line (RA3)

CONFIGA   equ  B'00000100'   ; data direction for PORTA
CONFIGB   equ  B'11111111'   ; data direction for PORTB

DATA_     equ  H'000C'       ; temporary register
COUNT_    equ  H'000D'       ; temporary register

        org   0              ; main program vector
        goto  Start          ; goto main program
        org   4              ; interrupt service routine vector
        goto  ServiceInt     ; goto service interrupt routine

Start   clrwdt               ; clear WDT to prevent RESET
        call  Initialise     ; initialise PORT A/B
        call  ResetInterrupts ; and setup interrupts
Loop    clrwdt               ; clear WDT
        sleep                ; goto sleep (& clear WDT)
        nop                  ;
        goto  Loop           ; infinite loop

ServiceInt
        bcf   INTCON, RBIE   ; mask/disable RB interrupt
        bcf   INTCON, INTE   ; mask/disable INT interrupt
        clrwdt               ; clear WDT to prevent RESET
        btfsc INTCON, RBIF   ; interrupt on RB7:4 ?
        goto  ServiceWakeup  ; if yes, service WAKEUP
        btfss INTCON, INTF   ; interrupt on RB0/INT ?
        goto  ResetInterrupts ; else reset interrupts

ServiceWakeup
        bsf   STATUS, RP0    ; select BANK 1
        clrwdt               ; clear WDT
        movlw B'10111000'    ; WDT prescaler = 1:1 (~18ms)
        movwf OPTION_REG     ; and INT on falling edge
        bcf   STATUS, RP0    ; select BANK 0

RepeatKeys
        sleep                ; goto sleep (~20ms) - WDT wake up
        comf  PORTB, W       ; read PORT B (complemented)
        btfsc STATUS, Z      ; if Z=0, send data
        goto  FinishUp       ; else send 0 and reset interrupts
        call  SendData       ; send data
        goto  RepeatKeys     ; repeat until no keys pressed

FinishUp
        call  SendData       ; send data (0 in this case)

ResetInterrupts
        bsf   STATUS, RP0    ; select BANK 1
        clrwdt               ; clear WDT
        movlw B'10111111'    ; WDT prescaler = 1:128 (~2.3s)
        movwf OPTION_REG     ; and INT on falling edge
        bcf   STATUS, RP0    ; select BANK 0
        clrf  INTCON         ; clear all INTs & FLAGS
        bsf   INTCON, RBIE   ; unmask/enable RB interrupt
        bsf   INTCON, INTE   ; unmask/enable INT interrupt
        retfie               ; return & enable global interrupts

SendData                     ; sends data (from W)
        movwf DATA_          ; move W to DATA_
        bcf   STATUS, C      ; clear CARRY
        clrf  COUNT_         ; clear temp (COUNT_)
        rlf   DATA_,  F      ; A BCDH GFE0
        rrf   COUNT_, F      ; 0 A000 0000
        rlf   DATA_,  F      ; B CDHG FE00
        rrf   COUNT_, F      ; 0 BA00 0000
        rlf   DATA_,  F      ; C DHGF E000
        rrf   COUNT_, F      ; 0 CBA0 0000
        rlf   DATA_,  F      ; D HGFE 0000
        rrf   COUNT_, F      ; 0 DCBA 0000
        swapf COUNT_, W      ; 0000 DCBA
        iorwf DATA_,  F      ; HGFE DCBA
        bsf   PORTA, DATAOUT ; send stop bit
        bsf   PORTA, SHUTDN_ ; enable MAX3222
        call  Delay6         ; delay to let MAX3222 stabilise
        call  Delay8         ; ..
        bcf   PORTA, DATAOUT ; send start bit
        nop                  ; (nop needed for timing)
        movlw H'08'          ; 8 bits to send
        movwf COUNT_         ; initialise counter

RotRght rrf   DATA_, F       ; rotate right
        btfsc STATUS, C      ; check CARRY, skip if C=0
        goto  Send1          ; if C=1 goto send 1

Send0   nop                  ; (nop needed for timing)
        bcf   PORTA, DATAOUT ; else send 0
        decfsz COUNT_, F     ; dec counter, skip if 0
        goto  RotRght        ; else keep rotating
        goto  SndStop        ; we skipped -> send stop bit

Send1   bsf   PORTA, DATAOUT ; send 1
        decfsz COUNT_, F     ; dec counter, skip if 0
        goto  RotRght        ; else keep rotating
        goto  SndStop        ; we skipped -> send stop bit

SndStop nop                  ; \
        nop                  ;  > Delay3 (needed for timing)
        nop                  ; /
        bsf   PORTA, DATAOUT ; send stop bit (twice)
        call  Delay8         ; delay for 1st stop bit
        call  Delay7         ; delay for 2nd stop bit
        bcf   PORTA, SHUTDN_ ; disable MAX3222
        return

Delay8  nop                  ; delay 8 cycles (inc call/return)
Delay7  nop                  ; delay 7 cycles (inc call/return)
Delay6  nop                  ; delay 6 cycles (inc call/return)
Delay5  nop                  ; delay 5 cycles (inc call/return)
Delay4  return

Initialise
        clrf  PORTA          ; clear PORT A
        clrf  PORTB          ; clear PORT B
        bsf   STATUS, RP0    ; select BANK 1 registers
        movlw CONFIGA        ; get CONFIG byte for PORT A
        movwf TRISA          ; store CONFIG byte in TRISA
        movlw CONFIGB        ; get CONFIG byte for PORT B
        movwf TRISB          ; store CONFIG byte in TRISB
        bcf   STATUS, RP0    ; select BANK 0 registers
        bsf   PORTA, ENABLE_ ; disable RECEIVERS (OK..TTL input)
        bcf   PORTA, SHUTDN_ ; shutdown TRANSMITTERS
        bsf   PORTA, DATAOUT ; send default data (STOP bit)
        return
END
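The SendData routine above first shuffles the complemented PORT B reading so that the byte sent on the serial line carries the key bits in the order H G F E D C B A (the comments beside each rlf/rrf step trace this), and then bit-bangs an asynchronous frame through the MAX3222: one start bit, eight data bits least-significant bit first, and two stop bits. The Python sketch below is not part of the patent disclosure; it restates that bit shuffle and frame layout using the bit labels from the assembly comments, so the transformation can be checked with ordinary integer arithmetic.

# Illustrative sketch only (not part of the patent): restates the bit shuffle
# and serial frame produced by the SendData routine above.

def reorder_keypad_byte(portb_complemented):
    """Mimic the SendData shuffle: input bits A B C D H G F E (RB7..RB0)
    are rearranged into the output byte H G F E D C B A."""
    low_nibble = portb_complemented & 0x0F            # H G F E
    high_nibble = (portb_complemented >> 4) & 0x0F    # A B C D
    reversed_high = 0
    for i in range(4):                                # A B C D -> D C B A
        if high_nibble & (1 << i):
            reversed_high |= 1 << (3 - i)
    return (low_nibble << 4) | reversed_high

def serial_frame(byte):
    """Frame clocked out through the MAX3222: one start bit (0), eight data
    bits least-significant bit first, then two stop bits (1)."""
    bits = [0]                                        # start bit
    bits += [(byte >> i) & 1 for i in range(8)]       # data bits, LSB first
    bits += [1, 1]                                    # two stop bits
    return bits

# Example: keys on RB0 and RB7 pressed (active low), so the complemented
# reading is 0b10000001; the shuffled byte is 0b00010001.
shuffled = reorder_keypad_byte(0b10000001)
print(format(shuffled, "08b"))   # 00010001
print(serial_frame(shuffled))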

Claims

CLAIMS:
1. A chordic engine for identifying predefined chordic commands in keypress data generated by a user, including: input means for receiving keypress data; and processing means coupled to said input means, said processing means programmed to: identify at least one predefined chord in said keypress data; obtain a parametric value relating to said at least one predefined chord; and identify a predefined chordic command based on said at least one identified chord and said parametric value.
2. The chordic engine of claim 1, wherein said parametric value is representative of a time duration of a keypress action.
3. The chordic engine of claim 1, wherein said parametric value is representative of an amount of pressure exerted in a keypress action.
4. The chordic engine of claim 1, wherein said parametric value is representative of an amount of displacement of at least one key in a keypress action.
5. The chordic engine of claim 1, wherein said input means comprises n input lines and wherein the number of predefined chordic commands exceeds 2ⁿ - 1.
6. The chordic engine of claim 1, further comprising output means for outputting data and said processing means is programmed to output data representative of said identified chordic command, wherein said output data comprises predefined input data for a receiving device.
7. The chordic engine of claim 6, wherein said output means broadcasts said output data to a plurality of receiving devices.
8. The chordic engine of claim 6, wherein said processing means is programmed to register a plurality of receiving devices, and wherein said output data is broadcast to a specific one of said plurality of registered receiving devices.
9. The chordic engine of claim 6, wherein said processing means is further programmed to receive and store a set of predefined chordic commands and a corresponding set of output data.
10. The chordic engine of claim 1, wherein said processing means is further programmed to: determine time durations between discrete key presses that form part of a predefined chord; compare said determined time durations with corresponding stored time durations; and identify a user based on an outcome of said comparison.
11. The chordic engine of claim 1, implemented as an Application Specific Integrated Circuit (ASIC).
12. An apparatus for authenticating users of a chordic input system, including: input means for receiving keypress data from a user; timing means for determining time durations between discrete key presses of said user that form part of a chord; and processing means for comparing said determined time durations to stored time durations and for authenticating said user if the difference between said determined time durations and said stored time durations is less than a predefined threshold.
13. A method for identifying predefined chordic commands in keypress data generated by a user, said method including the steps of: receiving said keypress data; identifying at least one predefined chord in said keypress data; obtaining a parametric value relating to said predefined chord; and identifying a predefined chordic command based on said at least one identified chord and said parametric value.
14. The method of claim 13, wherein said parametric value is representative of a time duration for which said chord is activated.
15. The method of claim 13, wherein said parametric value is representative of an amount of pressure exerted by a user in a keypress action.
16. The method of claim 13, wherein said parametric value is representative of an amount of displacement of at least one key in a keypress action.
17. The method of claim 13, comprising the further step of outputting data representative of said predefined chord sequence, wherein said output data comprises predefined input data for a receiving device.
18. The method of claim 13, comprising the further steps of: determining time durations between discrete key presses that form part of a predefined chord; comparing said determined time durations with corresponding stored time durations; and identifying a user based on an outcome of said comparison.
19. A method for authenticating users of a chordic input system, said method including the steps of: receiving keypress data from a user; determining time durations between discrete key presses of said user that form part of a chord; comparing said determined time durations to stored time durations; and authenticating said user if the difference between said determined time durations and said stored time durations is less than a predefined threshold.
20. A computer program product having a computer readable medium having a computer program recorded therein for identifying predefined chordic commands in keypress data generated by a user, said computer program product comprising: computer program code for receiving keypress data; computer program code for identifying at least one predefined chord in said keypress data; computer program code for obtaining a parametric value relating to said at least one predefined chord; and computer program code for identifying a predefined chordic command based on said at least one identified chord and said parametric value.
21. The computer program product of claim 20, wherein said parametric value is representative of a time duration for which said chord is activated.
22. The computer program product of claim 20, wherein said parametric value is representative of an amount of pressure exerted by a user in a keypress action.
23. The computer program product of claim 20, wherein said parametric value is representative of an amount of displacement of at least one key in a keypress action.
24. The computer program product of claim 20, further comprising computer program code for outputting data representative of said predefined chord sequence, wherein said output data comprises predefined input data for a receiving device.
25. The computer program product of claim 20, further comprising: computer program code for receiving a set of predefined chord sequences and a corresponding set of predefined output data; and computer program code for storing said set of predefined chord sequences and said corresponding set of predefined output data.
26. The computer program product of claim 20, further comprising: computer program code for determining time durations between discrete key presses that form part of a predefined chord; computer program code for comparing said determined time durations with corresponding stored time durations; and computer program code for identifying a user based on an outcome of said comparison.
27. A computer program product having a computer readable medium having a computer program recorded therein for authenticating users of a chordic input system, said computer program product comprising: computer program code for receiving keypress data from a user; computer program code for determining time durations between discrete key presses of said user that form part of a chord; computer program code for comparing said determined time durations to stored time durations; and computer program code for authenticating said user if the difference between said determined time durations and said stored time durations is less than a predefined threshold.
28. A chordic engine, comprising: an input interface for receiving keypress data from an input device having a plurality of keys assigned only to a specific single digit of a human hand, the keys being physically arranged to match the relative sequential relation of the digits; at least one output interface; and a processor coupled to said input interface and said at least one output interface, said processor programmed to: output a plurality of representations, wherein each representation represents an instruction or data input and is output in a manner to non-visually indicate which combination of one or more keys of said input device effect said instruction or data input, and thus which respective one or more keys are to be pressed; and identify instructions or data embedded in said keypress data; wherein at least one of said plurality of representations indicates in a non-visual manner that a simultaneous combination of digits effect the represented instruction or data input.
29. The chordic engine of claim 28, wherein said representations that non-visually indicate which combination of one or more keys of said input device effect said instruction or data input comprise audible representations.
30. The chordic engine of claim 29, wherein said audible representations comprise speech.
31. The chordic engine of claim 29, wherein said audible representations comprise musical chords.
32. The chordic engine of claim 31, wherein said musical chords solely comprise harmonious sounds based on pentatonic scales.
33. The chordic engine of claim 31, wherein said musical chords comprise dissonant sounds.
34. The chordic engine of claim 28, wherein said representations that non-visually indicate which combination of one or more keys of said input device effect said instruction or data input comprise movement.
35. The chordic engine of claim 28, wherein said representations that non-visually indicate which combination of one or more keys of said input device effect said instruction or data input are based on biological stimuli.
36. The chordic engine of claim 35, wherein said biological stimuli comprise one or more stimuli selected from the group of stimuli consisting of smell, temperature and electrical impulses.
37. The chordic engine of claim 28, wherein said processor is further programmed to output a non-chordic representation of said identified instructions or data input.
38. The chordic engine of claim 28, implemented as an Application Specific Integrated Circuit (ASIC).
39. A method for processing keypress data, said method comprising the steps of: receiving keypress data from an input device having a plurality of keys assigned only to a specific single digit of a human hand, the keys being physically arranged to match the relative sequential relation of the digits; outputting a plurality of representations, wherein each representation represents an instruction or data input and is output in a manner to non-visually indicate which combination of one or more keys of said input device effect said instruction or data input, and thus which respective one or more keys are to be pressed; and identifying instructions or data embedded in said keypress data; wherein at least one of said plurality of representations indicates in a non-visual manner that a simultaneous combination of digits effect the represented instruction or data input.
40. The method of claim 39, wherein said representations that non-visually indicate which combination of one or more keys of said input device effect said instruction or data input comprise audible representations.
41. The method of claim 40, wherein said audible representations comprise speech.
42. The method of claim 40, wherein said audible representations comprise musical chords.
43. The method of claim 42, wherein said musical chords comprise harmonious sounds based on pentatonic scales.
44. The method of claim 42, wherein said musical chords comprise dissonant sounds.
45. The method of claim 39, wherein said representations that non-visually indicate which combination of one or more keys of said input device effect said instruction or data input comprise movement.
46. The method of claim 39, wherein said representations that non-visually indicate which combination of one or more keys of said input device effect said instruction or data input are based on biological stimuli.
47. The method of claim 46, wherein said biological stimuli comprise one or more stimuli selected from the group of stimuli consisting of smell, temperature and electrical impulses.
48. The method of claim 39, comprising the further step of outputting a non-chordic representation of said identified instructions or data input.
49. A computer program product having a computer readable medium having a computer program recorded therein for processing keypress data, said computer program product comprising: computer program code for receiving instructions or data from an input device having a plurality of keys assigned only to a specific single digit of a human hand, the keys being physically arranged to match the relative sequential relation of the digits; computer program code for outputting a plurality of representations, wherein each representation represents an instruction or data input and is output in a manner to non-visually indicate which combination of one or more keys of said input device effect said instruction or data input, and thus which respective one or more keys are to be pressed; and computer program code for identifying instructions or data embedded in said keypress data; wherein at least one of said plurality of representations indicates in a non-visual manner that a simultaneous combination of digits effect the represented instruction or data input.
50. The computer program product of claim 49, wherein said representations that non-visually indicate which combination of one or more keys of said input device effect said instruction or data input comprise audible representations.
51. The computer program product of claim 50, wherein said audible representations comprise speech.
52. The computer program product of claim 50, wherein said audible representations comprise musical chords.
53. The computer program product of claim 52, wherein said musical chords comprise harmonious sounds based on pentatonic scales.
54. The computer program product of claim 52, wherein said musical chords comprise dissonant sounds.
55. The computer program product of claim 49, wherein said representations that non-visually indicate which combination of one or more keys of said input device effect said instruction or data input comprise movement.
56. The computer program product of claim 49, wherein said representations that non-visually indicate which combination of one or more keys of said input device effect said instruction or data input are based on biological stimuli.
57. The computer program product of claim 56, wherein said biological stimuli comprise one or more stimuli selected from the group of stimuli consisting of smell, temperature and electrical impulses.
58. The computer program product of claim 49, further comprising computer program code for outputting a non-chordic representation of said identified instructions or data input.
PCT/AU2004/000797 2003-06-18 2004-06-18 A chordic engine for data input WO2004111823A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AU2003903098 2003-06-18
AU2003903098A AU2003903098A0 (en) 2003-06-19 2003-06-19 A chordic engine for data input

Publications (1)

Publication Number Publication Date
WO2004111823A1 true WO2004111823A1 (en) 2004-12-23

Family

ID=31954137

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU2004/000797 WO2004111823A1 (en) 2003-06-18 2004-06-18 A chordic engine for data input

Country Status (2)

Country Link
AU (1) AU2003903098A0 (en)
WO (1) WO2004111823A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4323888A (en) * 1979-12-21 1982-04-06 Megadata Corporation Keyboard system with variable automatic repeat capability
EP0085645A2 (en) * 1982-02-02 1983-08-10 ERGOPLIC MAKASHOT (1981) Ltd. Keyboard apparatus
US4805222A (en) * 1985-12-23 1989-02-14 International Bioaccess Systems Corporation Method and apparatus for verifying an individual's identity
US5900864A (en) * 1994-05-23 1999-05-04 Australian Institute Of Marine Science Human/machine interface for computing devices
US5880418A (en) * 1997-11-20 1999-03-09 Livesay; L. D. Method and apparatus for a multi-function manual controller
US6442692B1 (en) * 1998-07-21 2002-08-27 Arkady G. Zilberman Security method and apparatus employing authentication by keystroke dynamics

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080195976A1 (en) * 2007-02-14 2008-08-14 Cho Kyung-Suk Method of setting password and method of authenticating password in portable device having small number of operation buttons
WO2014043758A1 (en) * 2012-09-24 2014-03-27 Kordtech Pty Ltd Chordic control system, chordic device controller and chordic interface cable
CN103696683A (en) * 2013-12-12 2014-04-02 北京市三一重机有限公司 Power head cruising drilling method, power head cruising drilling system and rotary drilling machine
WO2018053599A1 (en) * 2016-09-25 2018-03-29 Kordtech Pty Ltd Human machine interface system
CN109791434A (en) * 2016-09-25 2019-05-21 科尔德私人有限公司 Human-computer interface system
US10976841B2 (en) 2016-09-25 2021-04-13 Kordtech Pty Ltd Human machine interface system
AU2017331809B2 (en) * 2016-09-25 2022-02-24 Kordtech Pty Ltd Human machine interface system
US11409380B2 (en) 2016-09-25 2022-08-09 Kordtech Pty Ltd Human machine interface system

Also Published As

Publication number Publication date
AU2003903098A0 (en) 2003-07-03

Similar Documents

Publication Publication Date Title
AU693553B2 (en) A human/machine interface
KR101717805B1 (en) Systems and methods for haptically-enhanced text interfaces
CN108874158B (en) Automatic adaptation of haptic effects
US8812972B2 (en) Dynamic generation of soft keyboards for mobile devices
US20030184452A1 (en) System, method, and computer program product for single-handed data entry
US6388657B1 (en) Virtual reality keyboard system and method
US6600480B2 (en) Virtual reality keyboard system and method
KR100714725B1 (en) Apparatus and method for protecting exposure of inputted information
KR20010093812A (en) Touch-typable devices based on ambiguous codes and methods to design such devices
US20040183783A1 (en) Method and apparatus for improved keyboard accessibility using vibrating keys
KR100579814B1 (en) Character Inputting System for Mobile Terminal And Mobile Terminal Using The Same
WO2004111823A1 (en) A chordic engine for data input
US20060139315A1 (en) Apparatus and method for inputting alphabet characters on keypad
KR20080070930A (en) Apparatus and method for inputing the korean alphabet in portable terminal
CN108052212A (en) A kind of method, terminal and computer-readable medium for inputting word
Sporka et al. Non-speech operated emulation of keyboard
EP0776500A1 (en) A human/machine interface for computing devices
CA2397567A1 (en) Apparatus and method for inputting alphabet characters on keypad
JP2012048417A (en) Input device
KR101988606B1 (en) Method for Mapping Alphabet and Hangul using Six Key
JP5461345B2 (en) Input device
Gupta et al. Svift: Swift vision-free text-entry for touch screens
KR20030042272A (en) korean language input system of wireless phone using a blind person
KR100433173B1 (en) Keypad telephone with Korean character and method for inputting Korean using the keypad
JP2006048492A (en) Password input device, password input method, and program thereof

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DPEN Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed from 20040101)
122 Ep: pct application non-entry in european phase