WO2010017156A1 - Automatic performance optimization for perceptual devices - Google Patents

Automatic performance optimization for perceptual devices

Info

Publication number
WO2010017156A1
Authority
WO
WIPO (PCT)
Prior art keywords
perceptual
stimulus
user
parameter
signal
Prior art date
Application number
PCT/US2009/052633
Other languages
French (fr)
Inventor
Bonny Banerjee
Lee Krause
Mark D. Skowronski
Original Assignee
Audigence, Inc.
Priority date
Filing date
Publication date
Priority claimed from US12/185,394 external-priority patent/US8755533B2/en
Application filed by Audigence, Inc. filed Critical Audigence, Inc.
Priority to AU2009279764A priority Critical patent/AU2009279764A1/en
Priority to EP09791124A priority patent/EP2321981A1/en
Publication of WO2010017156A1 publication Critical patent/WO2010017156A1/en


Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/70 Adaptation of deaf aid to hearing loss, e.g. initial electronic fitting
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/12 Audiometering
    • A61B5/121 Audiometering evaluating hearing capacity
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/06 Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
    • G10L2021/065 Aids for the handicapped in understanding
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208 Noise filtering
    • G10L21/0264 Noise filtering characterised by the type of parameter measurement, e.g. correlation techniques, zero crossing techniques or predictive techniques

Definitions

  • This invention relates to systems and methods for optimizing performance of perceptual devices to adjust to a user's needs and, more particularly, to systems and methods for adjusting the parameters of digital hearing devices to customize the output from the hearing device to a user.
  • Perception is integral to intelligence. Perceptual ability is a prerequisite for any intelligent agent, living or artificial, to function satisfactorily in the real world. For an agent to experience an external environment with its perceptual organs (or sensors, in the case of artificial agents), it sometimes becomes necessary to augment the perceptual organs, the environment, or both.
  • human eyes are often augmented with a pair of prescription glasses.
  • the environment is augmented with devices, such as speakers and sub-woofers, placed in certain positions with respect to the agent.
  • the agent often has to wear specially designed eyeglasses, such as polarized glasses.
  • perceptual devices include, without limitation, audio headphones, hearing aids, cochlear implants, low-light or "night-vision” goggles, tactile feedback devices, etc.
  • Due to personal preference, taste, and the raw perceptual ability of the organs, the quality of experience achieved by augmenting the agent's perceptual organs or environment with devices is often user-specific. As a result, the devices should be tuned to provide the optimum experience to each user. In the case of hearing devices (e.g., hearing aids, cochlear implants), such devices are endowed with parameters that tailor the device's performance to an individual's hearing needs.
  • Agents with simple perceptual systems (e.g., robotic vacuum cleaners) have sufficient transparency to allow for the tracking of their raw perceptual abilities, while agents with complex perceptual systems (e.g., humans) lack that transparency.
  • a sophisticated perceptual device should also allow the user to tune the device to meet that user's particular perceptual needs.
  • Such complex devices often have a large set of parameters that can be tuned to a specific user's needs. Each parameter can be assigned one of many values, and determining the values of parameters for a particular user's optimum performance is difficult.
  • a user is required to be thoroughly tested with the device in order to be assigned the optimum parameter values. The number of tests required increases exponentially with the number of device parameters. Dedicating a significant amount of time to testing often is not a feasible option; accordingly, it may be advantageous to reduce the complexity of the problem.
  • the invention relates to a method for modifying a controllable stimulus generated by a perceptual device in communication with a human user, the method including: generating an input signal to the perceptual device, the perceptual device sending a stimulus to the human user, the stimulus defined at least in part by a parameter, the parameter having a value; receiving an output signal from the human user, the output signal based at least in part on a perception of the stimulus by the human user; determining a difference between the input signal and the output signal; constructing a perceptual model based at least in part on the difference; and suggesting a value for the parameter based at least in part on the perceptual model.
  • suggesting a value further includes utilizing a knowledge base.
  • the knowledge base includes at least one of declarative knowledge and procedural knowledge.
  • the method further includes generating a second input signal to the perceptual device based at least in part on the perceptual model.
  • the input signal is an audio signal, and/or the perceptual device is a digital audio device.
  • the invention in another aspect, relates to a system for modifying a controllable stimulus generated by a perceptual device in communication with a human user, the system including: a test set generator for generating a test set to the perceptual device, the perceptual device sending a stimulus to the human user, the stimulus defined at least in part by a parameter, the parameter including a value; a signal receiver for receiving an output signal from the human user, the output signal based at least in part on a perception of the stimulus by the human user; a perceptual model module for constructing a perceptual model based at least in part on the difference; and a parameter generator for suggesting a value for the parameter based at least in part on the perceptual model.
  • the system further includes a second signal generator for generating a second input signal to the perceptual device based at least in part on the perceptual model.
  • the system further includes a storage module for storing information used in the construction of the perceptual model.
  • the information stored in the storage module includes a knowledge base.
  • the system includes a rule extraction module for formulating a rule based at least in part on the perceptual model.
  • the parameter generator suggests a value for the parameter based at least in part on at least one of information obtained from the storage module and information obtained from the perceptual model module.
  • the signal generator includes the second signal generator.
  • the input signal is an audio signal.
  • the invention relates to an article of manufacture having computer-readable portions embodied thereon for modifying a controllable stimulus generated by a perceptual device in communication with a user, the article including: computer readable instructions for providing an input signal to the perceptual device, the perceptual device sending a stimulus to the human user, the stimulus defined at least in part by a parameter, the parameter having a value; computer readable instructions for receiving an output signal from the agent, the output signal based at least in part on a perception of the stimulus by the human user; computer readable instructions for determining a difference between the input signal and the output signal; computer readable instructions for constructing a perceptual model based at least in part on the difference; and computer readable instructions for suggesting a value for the parameter based at least in part on the perceptual model.
  • the article of manufacture further includes computer readable instructions for providing a second input signal to the perceptual device based at least in part on the perceptual model.
  • the invention in another aspect, relates to a method of tuning a perceptual device from a speech waveform, the method including the steps of: inputting a speech waveform from a user response to a stimulus; extracting at least one first acoustic feature from the waveform; segmenting at least one phoneme from the at least one first acoustic feature; extracting at least one second acoustic feature from the at least one phoneme; comparing the speech waveform to a stimulus; and determining at least one parameter value for the perceptual device.
  • Embodiments of the above aspect include the steps of: transmitting a stimulus to a user; and receiving a user response based at least in part on the stimulus.
  • the at least one first acoustic feature is extracted utilizing a frame-based procedure.
  • the at least one second acoustic feature is extracted utilizing a segment-based procedure.
  • the method includes the step of determining an error that is a difference between the speech waveform and the stimulus. In still other embodiments, the error is equal to $\varepsilon = \sum_i w_i \, |f_{s,i} - f_{r,i}|$, where $w_i$ is the weight of the $i$-th feature, $f_{s,i}$ and $f_{r,i}$ are the $i$-th features of the stimulus and response respectively, and $|\cdot|$ denotes a distance measure.
  • the distance measure is a Mahalanobis distance.
  • the invention in another aspect, relates to an article of manufacture having computer-readable program portions embedded thereon for tuning a perceptual device from a speech waveform, the program portions including: instructions for inputting a speech waveform from a user response to a stimulus; instructions for extracting at least one first acoustic feature from the waveform; instructions for segmenting at least one phoneme from the at least one first acoustic feature; instructions for extracting at least one second acoustic feature from the at least one phoneme; instructions for comparing the speech waveform to a stimulus; and instructions for determining at least one parameter value for the perceptual device.
  • the invention in another aspect, relates to a system for tuning a perceptual device from a speech waveform, the system including: a receiver for receiving a speech waveform from a user response to a stimulus; a first extractor for extracting at least one first acoustic feature from the waveform; a first processor for segmenting at least one phoneme from the at least one first acoustic feature; a second extractor for extracting at least one second acoustic feature from the at least one phoneme; a second processor for comparing the speech waveform to a stimulus; and a third processor for determining at least one parameter value for the perceptual device.
  • Embodiments of the above aspect include a transmitter for transmitting a stimulus to a user.
  • the system includes a system processor that includes the first extractor, the first processor, the second extractor, the second processor, the third processor, and the fourth processor.
  • the invention comprises an article of manufacture having a computer-readable medium with computer-readable instructions embodied thereon for performing the methods described in the preceding paragraphs.
  • the functionality of a method of the present invention may be embedded on a computer-readable medium, such as, but not limited to, a floppy disk, a hard disk, an optical disk, a magnetic tape, a PROM, an EPROM, CD-ROM, DVD-ROM or downloaded from a server.
  • the functionality of the techniques may be embedded on the computer-readable medium in any number of computer-readable instructions, or languages such as, for example, FORTRAN, PASCAL, C, C++, Java, PERL, LISP, JavaScript, C#, Tcl, BASIC and assembly language.
  • the computer-readable instructions may, for example, be written in a script, macro, or functionally embedded in commercially available software (such as EXCEL or VISUAL BASIC).
  • FIG. 1 is a schematic diagram of a method for automatic hearing device parameter tuning from a speech waveform
  • FIG. 2 is a schematic diagram depicting the relationship between a perceptual device and an agent in accordance with one embodiment of the present invention
  • FIG. 3 is a schematic diagram of an apparatus in accordance with one embodiment of the present invention
  • FIG. 4 is the schematic diagram of FIG. 3 incorporating a knowledge base in accordance with one embodiment of the present invention
  • FIG. 5 is a flowchart of a testing procedure in accordance with one embodiment of the present invention.
  • FIG. 6 is a schematic diagram of a testing system in accordance with one embodiment of the present invention.
  • FIG. 1 depicts one embodiment of a method 10 for automatic tuning of hearing device parameters directly from acoustic features of speech.
  • the speech waveform is spoken by a hearing device user (i.e., a patient or agent) 12 after being presented with an oral speech stimulus in a stimulus/response test paradigm, such as that depicted in U.S. Patent No. 7,206,416, entitled "Speech-based Optimization of Digital Hearing Devices."
  • the speech waveform may include, generally, one or more multi-phoneme sounds spoken by a user. These multi-phoneme sounds may form parts, or entire portions of, words, phrases, sentences, or other constructs of spoken language, and may be in any language or in a plurality of languages.
  • the speech waveform is input into an acoustic feature extraction process 14.
  • the acoustic features are input into a segmentation routine 16 which delimits phoneme boundaries in the speech waveform. Segmentation may be performed using a hidden Markov model (HMM), described in Rabiner, L., "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition," Proc. IEEE, vol. 77, no. 2, pp. 257-286, Feb. 1989, the disclosure of which is hereby incorporated by reference herein in its entirety.
  • the HMM may be trained as phoneme models, bi-phone models, N-phone models, syllable models or word models.
  • a Viterbi path of the speech waveform through the HMM may be used for segmentation, so the phonemic representation of each state in the HMM is required.
  • Phonemic representation of each state may utilize hand-labeling phoneme boundaries for the HMM training data.
  • Specific states are assigned to specific phonemes (more than one state may be used to represent each phoneme for all types of HMMs).
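
For illustration only, the following minimal sketch shows how a Viterbi path through a trained HMM could be turned into phoneme segments. It is not the patent's implementation: the transition matrix, the per-frame state log-likelihoods, and the state-to-phoneme mapping are all synthetic assumptions standing in for a trained model.

```python
import numpy as np

def viterbi(log_A, log_B):
    """Standard Viterbi decoding.
    log_A: (S, S) log transition matrix; log_B: (T, S) per-frame log state likelihoods."""
    T, S = log_B.shape
    delta = np.full((T, S), -np.inf)
    psi = np.zeros((T, S), dtype=int)
    delta[0] = log_B[0]                              # uniform initial-state prior (illustrative)
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A       # (S, S): previous state -> next state
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[t]
    states = np.zeros(T, dtype=int)
    states[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        states[t] = psi[t + 1, states[t + 1]]
    return states

def states_to_segments(states, state_to_phoneme, frame_step_ms=10.0):
    """Collapse the decoded state path into (phoneme, start_ms, end_ms) segments."""
    segments, start = [], 0
    for t in range(1, len(states) + 1):
        if t == len(states) or state_to_phoneme[states[t]] != state_to_phoneme[states[start]]:
            segments.append((state_to_phoneme[states[start]],
                             start * frame_step_ms, t * frame_step_ms))
            start = t
    return segments

# Toy example: 3 states mapped to 2 hypothetical phoneme labels.
rng = np.random.default_rng(0)
log_A = np.log(np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]]))
log_B = rng.normal(size=(50, 3))                     # stand-in for per-frame acoustic log-likelihoods
path = viterbi(log_A, log_B)
print(states_to_segments(path, {0: "f", 1: "f", 2: "p"}))
```

In practice the per-frame likelihoods would come from the trained phoneme, bi-phone, or word models described above rather than random numbers.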
  • the frame-based acoustic feature extraction process may be a conventional ASR front end.
  • Human factor cepstral coefficients (HFCCs), a spectral flatness measure, a voice bar measure, and delta and delta-delta coefficients may be utilized as acoustic features.
  • HFCCs and spectral flatness measure are described in Skowronski, M. D. and J. G. Harris, "Exploiting independent filter bandwidth of human factor cepstral coefficients in automatic speech recognition," J. Acoustical Society of America, vol. 116, no. 3, pp. 1774-1780, Sept. 2004, and Skowronski, M. D. and J. G. Harris, "Applied principles of clear and Lombard speech for intelligibility enhancement in noisy environments," Speech Communication, vol. 48, no. 5, pp. 549-558, May 2006, the disclosures of which are hereby incorporated by reference herein in their entireties. Acoustic features may be measured for each analysis frame at predetermined durations.
  • Frame durations of 1 ms to about 50 ms, from about 10 ms to about 40 ms, and from about 15 ms to about 30 ms are acceptable. In certain embodiments, durations of about 20 ms are desirable. Uniform overlap between adjacent frames is also desirable. The overlap duration may be in the range of about 0 ms to a predetermined overlap duration. The predetermined overlap duration may be quantified as the frame duration minus Δ, where Δ is a small positive value greater than zero. A smaller Δ yields more overlap between frames, and more frames per second. As Δ goes to zero, frames per second goes to infinity. Overlap durations of about 10 ms between adjacent frames may be desirable in certain embodiments. Analysis frames and overlaps having other durations and times are also contemplated.
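
For concreteness, here is a short framing sketch, an assumption-laden illustration rather than the patent's front end, using the 20 ms frame and 10 ms overlap called out above and an assumed 16 kHz sample rate:

```python
import numpy as np

def frame_signal(x, sample_rate, frame_ms=20.0, overlap_ms=10.0):
    """Split a 1-D waveform into overlapping analysis frames.

    frame_ms - overlap_ms is the hop; the overlap may range from 0 ms up to
    just under the frame duration (frame duration minus a small positive delta)."""
    frame_len = int(round(sample_rate * frame_ms / 1000.0))
    hop = int(round(sample_rate * (frame_ms - overlap_ms) / 1000.0))
    if hop <= 0:
        raise ValueError("overlap must be strictly less than the frame duration")
    n_frames = 1 + max(0, (len(x) - frame_len) // hop)
    frames = np.stack([x[i * hop: i * hop + frame_len] for i in range(n_frames)])
    return frames  # shape: (n_frames, frame_len)

# Example: 1 second of a 440 Hz tone at 16 kHz -> 99 frames of 320 samples each.
fs = 16000
t = np.arange(fs) / fs
frames = frame_signal(np.sin(2 * np.pi * 440 * t), fs, frame_ms=20.0, overlap_ms=10.0)
print(frames.shape)
```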
  • Segment-based acoustic features for each phoneme of the speech waveform are measured from segmented regions 18.
  • the features include HFCC calculated over a single window spanning the entire region of a phoneme (which may vary from about 5 ms to tens of seconds, depending on the agent's unconscious or purposeful exhalation length while forming a sound or sounds), a single voice bar measure, and/or a single spectral flatness measure, augmented with several other acoustic features.
  • Various other acoustic features may be appended to the set of segment-based features listed above that provide additional information targeting specific distinctive features of phonemes as described in Jakobson, R., C. G. M. Fant, and M. Halle, "Preliminaries to Speech Analysis: The Distinctive Features and Their Correlates."
  • the difference between the two constitutes the error in perceiving that stimulus.
  • the distribution of acoustic features for each stimulus phoneme is then calculated.
  • This distribution may be represented by any distribution model, estimated from the same hand-labeled data used to train the segmentation HMM.
  • a Gaussian mixture model may be utilized to represent the stimulus features.
  • An error ε is calculated as the mean of the weighted distance between the distribution of features for each stimulus and the features extracted from the corresponding response; that is, $\varepsilon = \sum_i w_i \, |f_{s,i} - f_{r,i}|$, where $w_i$ is the weight of the $i$-th feature, $f_{s,i}$ and $f_{r,i}$ are the $i$-th features of the stimulus and response respectively, and $|\cdot|$ denotes a distance measure.
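
A minimal sketch of the distribution-modeling step, under simplifying assumptions (scikit-learn's GaussianMixture is assumed available, and synthetic vectors stand in for hand-labeled HFCC training data), showing how a per-phoneme stimulus feature distribution might be fit and a response feature vector scored against it:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Stand-in for hand-labeled training features of one stimulus phoneme (e.g., 13-dim HFCCs).
train_features = rng.normal(loc=0.0, scale=1.0, size=(200, 13))

# Fit a Gaussian mixture model to represent the stimulus phoneme's feature distribution.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(train_features)

# Score a response feature vector: a lower log-likelihood under the stimulus
# model suggests a larger perceptual error for that phoneme.
response_features = rng.normal(loc=0.5, scale=1.0, size=(1, 13))
log_likelihood = gmm.score_samples(response_features)[0]
print(f"response log-likelihood under stimulus model: {log_likelihood:.2f}")
```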
  • In the optimization process 20, information corresponding to a test stimulus and/or a previous device parameter set 22 is utilized to optimize or tune a perceptual device.
  • the optimization process 20, in turn, generates a new device parameter set 24 that is used to improve the performance of the perceptual device.
  • Various embodiments of the methods and systems disclosed herein are used to "tune" a perceptual device.
  • the term "optimization" is sometimes used to describe the process of tuning, which typically includes modifying parameters of a perceptual device.
  • Certain embodiments of the disclosed methods and systems automatically tune at least one device parameter based on a user's raw perceptual ability to improve the user's perception utilizing different tuning algorithms operating separately or in tandem to allow the device to be tuned quickly.
  • the device parameters can be user-specific or user-independent.
  • a model is created to describe a user's perception (i.e., the perceptual model). This model is incremental and is specific to a user and his device.
  • one or more algorithms are applied to the model, resulting in predictions (along with confidence and explanation) of the optimum parameter values for the user.
  • the user is iteratively tested with the values having the highest confidence, and the model is further updated.
  • a set of rules capturing user-independent information is used to tune certain parameters.
  • the number of parameters governing the operation of a given perceptual device may be large.
  • the amount of data required to faithfully model a user's perceptual strengths and weaknesses using that device increases exponentially with the number of device parameters; this limits the ability to reach optimal settings for the device in a reasonable time.
  • a number of algorithms are used with simple independent assumptions regarding the model. Using these assumptions, each algorithm studies the model and makes predictions with a confidence. The most confident prediction is chosen at any point in time. This architecture helps reduce the complexity of a solution that would otherwise be enormous.
  • lookup tables or other procedures may be utilized to perform the optimization, in much the same way as the algorithms described above.
  • a user may be considered a black box with perceptual organs that can accept a signal as input and produce a signal as output in accordance with certain instructions.
  • This method is useful for applications where the black box is too complex to be modeled non-stochastically, such as the human brain.
  • the instructions can be conveyed by different means. For example, a human might be told instructions in a natural language; an artificial agent might be programmed with the instructions.
  • Raw perception of a user is judged by some criteria that measure the actual output signal against the output signal expected from the application of the given set of instructions to the input signal. For example, if the input signals are spoken phonemes, the black box is a human brain with ears as the perceptual organs, and the instruction is to reproduce the input phonemes (as speech or in writing), the perception might be measured by computing the difference between the input and output phonemes. In another example, if the input signal is a set of letters written on a piece of paper, the black box is a human brain with eyes as the perceptual organs, and the instruction is to reproduce the letters (as speech or in writing), the perception might be measured by computing the difference between the input and output letters.
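
As a toy illustration of this black-box view (not the patent's metric), the perception error for a spoken-phoneme test could simply be the fraction of stimulus phonemes that the response fails to reproduce at the same position, assuming the response has already been transcribed into phoneme labels:

```python
def phoneme_error(stimulus, response):
    """Fraction of stimulus phonemes not reproduced at the same position in the response."""
    length = max(len(stimulus), len(response))
    mismatches = sum(1 for i in range(length)
                     if i >= len(stimulus) or i >= len(response) or stimulus[i] != response[i])
    return mismatches / length if length else 0.0

# The user heard "b a t" but repeated "p a t": one of three phonemes was misperceived.
print(phoneme_error(["b", "a", "t"], ["p", "a", "t"]))  # 0.333...
```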
  • FIG. 2 depicts an exemplary relationship between the perceptual device D and the agent A. Given a user or an agent A, one or more devices D, an input signal $S_{inp}$, and a corresponding output signal $S_{out}$ that the agent has produced obeying certain instructions, $S_{int}$ is the intermediate signal or stimulus emanating from the device(s) and perceived by the agent. In the case of a digital audio device, the stimulus is the sound actually heard by the user.
  • the intermediate signal cannot be measured in the same way that $S_{inp}$ and $S_{out}$ are.
  • the function A is characterized by the device parameters.
  • Embodiments of the present invention (1) statistically model the perceptual errors (i.e., some metric applied to $S_{inp}$ and $S_{out}$) for an agent with respect to the device parameters, and (2) study this perceptual model to predict the parameter values best suited for the agent.
  • a method for automatically tuning the parameters of at least one perceptual device in a user-specific way.
  • the agent or its environment is fitted with a device(s) whose parameters are preset, for example, to factory default values.
  • the proposed method may be implemented as a computer program that tests the raw perception of the agent.
  • FIG. 3 depicts one such implementation of the program 100. Based on the results of the test, the program 100 may suggest new parameter values along with an explanation of why such values are chosen and the confidence of the suggested set of values 102.
  • the devices 104 are reset with the parameter values with the highest confidence or best explanation.
  • a human tester, for example an audiologist fine-tuning a digital hearing aid or cochlear implant (CI), may oversee the testing.
  • the purpose of testing is to determine the raw perceptual ability, independent of context and background knowledge, of the agent 108.
  • a series of input signals is presented to the agent 108 whose environment is fitted with at least one perceptual device 104 set to certain parameter values.
  • the input signal may be of sufficient length, duration, complexity, etc. to exceed a single phoneme sound. Such a complex signal will ultimately elicit a responsive speech waveform from the agent.
  • After each signal is presented, the agent 108 is given enough time to output a signal in response to its perceived signal, in accordance with instructions that the agent 108 has previously received.
  • the output signal 110 corresponding to each input signal is recorded along with the time required for response.
  • an ASR 150 may be used to process the speech waveform.
  • the speech waveform is input into an acoustic feature extractor 152.
  • the acoustic features are input into a processor for segmentation 154 which delimits phoneme boundaries in the speech waveform.
  • a second extractor 156 then measures segment-based acoustic features for each phoneme of the speech waveform. The resulting features are then used in the following processes to compare the stimulus to the speech waveform and to optimize the perceptual device.
  • a metric captures the difference between the input signal and the agent's response in a meaningful way such that a model 112 of the agent's perceptual ability can be incrementally constructed using that metric and the device parameters.
  • the test set creator or generator 114 modifies the parameters based on information received during the test.
  • the next set of input signals on which the agent 108 should be tested is chosen based on its strengths and weaknesses as evident from the model 112.
  • a new test starts with the perceptual devices 104 set to new parameter values, again based on the application of the algorithm to the information.
  • An increase in response time indicates that either the agent 108 is having difficulty in perception or the agent 108 is getting fatigued. In the latter case, the agent 108, tester, or program 100 may opt to rest before further testing.
  • the model 112 describes the perceptual ability of the agent 108 with respect to the perceptual devices 104. Given an accurate model, one can predict the parameter values best suited for an agent 108. However, the model 112 is never complete until the agent 108 has been tested with all combinations of values for the parameters.
  • FIG. 4 presents another embodiment of the present invention incorporating a knowledge base into the computer program 100 of FIG. 3.
  • the knowledge base (KB) of the computer program 100 stores knowledge in two forms - declarative 120 and procedural 122.
  • Declarative knowledge 120 is stored as a set of statements useful for predicting a new set of parameter values 132 based on the model of the agent's perceptual ability.
  • An example of declarative knowledge would include a situation where the agent 108 is a human with hearing loss, the device 104 is a CI, and his model 112 shows that he is weak in hearing the middle range of the frequency spectrum.
  • the declarative knowledge 120 would include a statement that more CI channels should be associated with frequencies in that middle range than the higher or lower frequency ranges.
  • Declarative knowledge can be readily applied, wherever appropriate, to make an inference. Often a user's previously tested parameters and device parameters 134 may be utilized with the declarative knowledge.
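
To make the declarative form concrete, here is a hypothetical sketch of such a rule; the band names, error values, and the 1.2x threshold are assumptions made for illustration and are not taken from the patent:

```python
def mid_frequency_rule(perceptual_model):
    """Declarative rule: if mid-band error dominates, allocate more CI channels to the mid band.

    perceptual_model: dict mapping frequency band -> average perceptual error (0..1).
    Returns a suggested channel allocation (fractions summing to 1) or None if the rule
    does not apply. Band names and the 1.2x threshold are illustrative assumptions.
    """
    low, mid, high = (perceptual_model[b] for b in ("low", "mid", "high"))
    if mid > 1.2 * max(low, high):
        return {"low": 0.25, "mid": 0.50, "high": 0.25}
    return None

model = {"low": 0.18, "mid": 0.41, "high": 0.22}   # user is weakest in the mid range
print(mid_frequency_rule(model))                   # -> more channels suggested for "mid"
```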
  • Procedural knowledge 122 is stored as procedures or algorithms that study the perceptual model 112 in order to make predictions for new parameter values.
  • Each item of procedural knowledge is an independent algorithm 124 that studies the model 112 in a way which might involve certain assumptions about the model 112. These items of procedural knowledge may also utilize declarative knowledge 120 to study the model 112. Upon studying a model 112 and comparing it with the stored models of previously tested similar agents using similar devices, the algorithms may derive new rules 126 for storage as items of declarative knowledge 128.
  • An example of procedural knowledge would include a situation where the agent is a human with hearing loss and the device is a CI. In this case, his model might be studied by an algorithm assuming that there exists a region in the model that represents the perceptual error minima of the agent. Hence, the algorithm will study the model hoping to find that minimum region and will predict appropriate parameter values for that minimum.
  • the number of adjustable parameters can be large.
  • the number of tests required to tune these parameters may even increase exponentially with the number of device parameters.
  • One of the challenges faced by the proposed method is to reduce the number of tests so that the time required for tuning the parameter values can be reduced to a practical time period.
  • One way to make the process more efficient is to utilize procedural knowledge 122.
  • a number of procedures, lookup tables, or algorithms 124 with very different assumptions are contemporaneously applied to the model 112. After application, each procedure provides its prediction of the parameters along with a confidence value for the prediction and an explanation of how the prediction was reached. These explanations are evaluated, either by a supervisory program or a tester, and that prediction that provides the best explanation is selected 130.
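
A sketch of this selection scheme, under the assumption that each item of procedural knowledge is a callable returning a (parameter prediction, confidence, explanation) triple and that the supervisory step simply keeps the most confident prediction; all names, parameters, and numbers are illustrative, not the patent's implementation:

```python
from typing import Callable, Dict, List, Tuple

Prediction = Tuple[Dict[str, float], float, str]   # (parameter values, confidence, explanation)

def minimum_seeker(model: Dict[Tuple[float, float], float]) -> Prediction:
    """Assume the model has a single error minimum; propose the sampled point with least error."""
    (rate, q), err = min(model.items(), key=lambda kv: kv[1])
    return {"stimulation_rate": rate, "q_value": q}, 1.0 - err, "least weighted error among sampled points"

def neighbor_averager(model: Dict[Tuple[float, float], float]) -> Prediction:
    """Assume the error surface is smooth; propose the centroid of the three best sampled points."""
    best = sorted(model.items(), key=lambda kv: kv[1])[:3]
    rate = sum(p[0][0] for p in best) / len(best)
    q = sum(p[0][1] for p in best) / len(best)
    return {"stimulation_rate": rate, "q_value": q}, 0.6, "centroid of three best sampled points"

def select_best(procedures: List[Callable], model) -> Prediction:
    """Apply every procedure to the model and keep the most confident prediction."""
    return max((proc(model) for proc in procedures), key=lambda p: p[1])

sampled = {(900.0, 10.0): 0.35, (1200.0, 20.0): 0.22, (1800.0, 20.0): 0.28}
params, confidence, why = select_best([minimum_seeker, neighbor_averager], sampled)
print(params, confidence, why)
```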
  • FIG. 5 depicts an exemplary testing procedure 200 in accordance with one embodiment of the present invention.
  • a user fitted with a CI is tested in the presence of an audiologist, who is monitoring the test.
  • the program begins by generating an input signal 202.
  • This input signal directs the CI to deliver a stimulus (e.g., a phoneme sound) to the user.
  • the stimulus parameter value is accessed 204 by the program. This value may be either a factory default setting (usually when the device is first implanted), a previously stored suggested value, or a previously stored override value. The latter two values are described in more detail below.
  • a stimulus based on the parameter is then delivered to the user 206.
  • the program waits for an output signal from the user 208.
  • This received output signal may take any form that is usable by the program. For example, the user may repeat the sound into a microphone, spell the sound on a keyboard, or press a button or select an icon that corresponds to their perception of the sound.
  • the program notes the time T when the output signal is received.
  • the elapsed time is compared to a predetermined value 210. If the time exceeds this value, the program determines that the user is fatigued 212, and the program ends 214. If the elapsed time does not exceed the threshold, however, the output signal and stimulus are compared 216 to begin analysis of the results. The difference between the output signal from the user and the stimulus sent from the CI to the user is used to construct the perceptual model 218. Next, the program suggests a value for the next parameter to be tested 220.
  • the audiologist may optionally decide whether or not to utilize the suggested value 222 for the next test procedure, based on his or her knowledge base or other factors that may not be considered by the program. If the audiologist overrides the suggested value with a different value, this override value is stored 224 to be used for the next test. The program then determines if the test is complete 226, and may terminate the test 228 if required or desired by the user. [0044] The test may be determined to be complete for a number of reasons. For example, the user or audiologist may be given the option at this point (or at any point during the test) to terminate testing.
  • the program may determine that during one or more iterations of the test, the user's response time, as measured in step 210, increased such that fatigue may be a factor, warranting termination of the testing. Additionally, the program may determine that, based on information regarding the tested device or the program itself, all iterations or options have been tested. In such a case, the program may determine that no further parameter adjustment would materially improve the operation of the device or the program. Also, the program may interpret inconsistent information at this point as indicative of an error condition that requires termination. Other procedures for terminating testing are known to the art. [0045] Returning to step 222, if the suggested value is accepted, this value is then stored for later use in a subsequent test 230.
  • the program may be operated without the assistance of an audiologist. In this case, acceptance of the suggested value would be the default response to the suggested value. In this way, the test may be utilized without the involvement of an audiologist.
  • the program, with few modifications, could allow the user to self-tune his device remotely, potentially over an internet connection or with a stand-alone tuning device.
  • a determination to continue the test 232 (having similar considerations as described in step 226), may be made prior to ending the test 234.
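
The flow of FIG. 5 can be compressed into the loop sketched below; the stimulus delivery, response capture, model update, and suggestion steps are stubbed out as hypothetical callables, and the fatigue threshold, optional audiologist override, and iteration cap are assumed values for illustration rather than the patent's program:

```python
import time

def run_test(deliver_stimulus, get_response, suggest_value, update_model,
             audiologist_override=None, fatigue_seconds=10.0, max_iterations=20):
    """Iterative tuning loop: deliver a stimulus, time the response, update the perceptual
    model from the stimulus/response difference, then pick the next parameter value."""
    parameter_value = "factory_default"          # or a previously stored suggested/override value
    model = {}
    for _ in range(max_iterations):
        stimulus = deliver_stimulus(parameter_value)
        start = time.monotonic()
        response = get_response()
        if time.monotonic() - start > fatigue_seconds:
            break                                # user appears fatigued; end the test
        update_model(model, stimulus, response)  # build the perceptual model incrementally
        suggested = suggest_value(model)
        override = audiologist_override(suggested) if audiologist_override else None
        parameter_value = override if override is not None else suggested
    return model, parameter_value

# Stubbed usage (no real device): echo stimuli with a fixed "misheard" response.
model, final_value = run_test(
    deliver_stimulus=lambda value: ("ba", value),
    get_response=lambda: "pa",
    suggest_value=lambda m: f"setting_{len(m)}",
    update_model=lambda m, s, r: m.setdefault(s[0], []).append(r),
)
print(final_value)
```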
  • the optimization methods of the current invention may be utilized with virtually any metric that may be used to test people that utilize digital hearing devices.
  • One such metric is disclosed in, for example, U.S. Patent No. 7,206,416 to Krause et al., the entire disclosure of which is hereby incorporated by reference herein in its entirety, and will be discussed herein as one exemplary application of the optimization methods.
  • a typical testing system 300 is depicted in FIG. 6.
  • the testing procedure tests the raw hearing ability, independent of context and background knowledge, of a hearing-impaired person.
  • an input signal 302 is generated and sent to a digital audio device, which, in this example, is a CI 304.
  • the CI will deliver an intermediate signal or stimulus 306, associated with one or more parameters, to a user 308.
  • the parameters may be factory-default settings.
  • the parameters may be otherwise defined, as described below. In either case, the test procedure utilizes the stored parameter values to define the stimulus (i.e., the sound).
  • the user After a signal is presented, the user is given enough time to make a sound signal (or speak a string of sounds sufficient to form a speech waveform) representing what he heard.
  • the output signal corresponding to each input signal is recorded along with the response time. If the response time exceeds a predetermined setting, the system determines that the person may be getting fatigued and will stop the test.
  • the output signal 310 may be a sound repeated by the user 308 into a microphone 312.
  • the resulting analog signal 314 is converted by an analog/digital converter 316 into a digital signal 318 delivered to the processor 320.
  • the user 308 may type a textual representation of the sound heard into a keyboard 322.
  • the output signal 310 is stored and compared to the immediately preceding stimulus.
  • an algorithm, lookup table, or other procedure decides the user's strengths and weaknesses and stores this information in an internal perceptual model. Additionally, the algorithm suggests a value for the next test parameter, effectively choosing the next input sound signal to be presented. This new value is delivered via the output module 324. If an audiologist is administering the test, the audiologist may choose to ignore the suggested value, in favor of their own suggested value. In such a case, the tester's value would be entered into the override module 326.
  • Whether the suggested value or the tester's override value is utilized, this value is stored in a memory for later use (likely in the next test). These tests may be repeated with different sounds, words, sentences, or other stimuli until the CI performance is optimized or otherwise modified, the user fatigues, etc. In one embodiment, the test terminates when the user's strengths and weaknesses with respect to the current CI device parameters are comprehensively determined. A new test starts with the CI device set to new parameter values.
  • the disclosed system utilizes any number of algorithms that may operate substantially or completely in parallel to suggest parameter values in real time.
  • Exemplary algorithms include (1) computing a reduced set of phonemes (input sound signals) for testing a person based on his strengths and weaknesses from past tests and using the features of the phonemes, thereby reducing testing time considerably; (2) computing a measure of performance for a person from his tests involving features of phonemes and their weights; (3) classifying a person based on their strengths and weaknesses as obtained from previous tests; and (4) predicting the parameter setting of a CI device to achieve optimum hearing for a person using his perceptual model and similar people's optimal device settings.
  • predetermined parameter values may be selected from a lookup table containing parameter value combinations based on a person's known or predicted strengths and weaknesses based on results from tests.
  • In human language, a phoneme is the smallest unit of distinguishable speech. Phonemes may be utilized in testing. For example, the input signal may be chosen from a set of phonemes from the Iowa Medial Consonant Recognition Test. Both consonant phonemes and vowel phonemes may be used during testing, though vowel phonemes may have certain disadvantages in testing: they are too easy to perceive and typically do not reveal much about the nature of hearing loss. It is known that each phoneme is characterized by the presence, absence or irrelevance of a set of nine features - Vocalic, Consonantal, Compact, Grave, Flat, Nasal, Tense, Continuant, and Strident.
  • A person's performance in a test can be measured by the number of input sound signals (i.e., phonemes, although actual words, phrases, sentences, or other language constructs in any language may also be used) he fails to perceive.
  • This type of basic testing may fail to capture the person's strengths and weaknesses because many phonemes share similar features. For example, the phonemes /f/ and /p/ differ in only one of the nine features, called Continuant.
  • a person's performance in a test is measured by the weighted mean of the feature errors, given by $\sum_i w_i n_i$ (equation (i)), where $w_i$ is the weight and $n_i$ is the number of errors in the $i$-th feature of the hierarchy.
  • weights of the features are experimentally ascertained to be ⁇ 0.151785714, 0.151785714, 0.142857143, 0.098214286, 0, 0.142857143, 0.125, 0.125, 0.0625 ⁇ .
  • Other weights may be utilized as the testing procedures evolve for a given user or group of users.
  • the actual weight utilized in experimentation to optimize may include other values and potentially may be dependent upon testing, the language being used, and other variables. Acceptable results may be obtained utilizing other weightings.
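
The weighted error of equation (i) can be illustrated with a small sketch that tallies per-feature errors from a test and applies the experimentally ascertained weights listed above; note that the pairing of those weights with the nine named features is an assumption made here for illustration:

```python
FEATURES = ["Vocalic", "Consonantal", "Compact", "Grave", "Flat",
            "Nasal", "Tense", "Continuant", "Strident"]

# Weights from the text; the pairing with FEATURES is an assumption for this example.
WEIGHTS = [0.151785714, 0.151785714, 0.142857143, 0.098214286, 0,
           0.142857143, 0.125, 0.125, 0.0625]

def weighted_feature_error(error_counts):
    """Equation (i): sum of w_i * n_i, where n_i counts errors in the i-th distinctive feature.

    error_counts: dict mapping feature name -> number of errors observed in the test.
    The weights sum to ~1, so this is also the weighted mean of the feature errors."""
    return sum(w * error_counts.get(f, 0) for f, w in zip(FEATURES, WEIGHTS))

# Example: a listener confuses /f/ with /p/ three times; those phonemes differ in Continuant.
print(round(weighted_feature_error({"Continuant": 3}), 4))   # 3 * 0.125 = 0.375
```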
  • This manner of testing provides a weighted error representing the user's performance with a set of parameter values. If a person is tested with all possible combinations of parameter values, the result can be represented as a weighted error surface in a high-dimensional space, where the dimension is one more than the number of parameters being considered. In this error surface, there exists a global minimum and one or more local minima. In general, while the person's performance is good at each of these local minima, his performance is the best at the global minimum.
  • One task of the computer program is to predict the location of the global minimum or at least a good local minimum within a short period of testing.
  • the perceptual model may be represented in a number of ways, such as using a surface model, a set of rules, a set of mathematical/logical equations and inequalities, and so on, to obtain results.
  • a surface model due to the presence of many parameters, a very high-dimensional error surface may be formed.
  • the minimum amount of data required to model such a surface increases exponentially with the number of dimensions leading to the so-called “curse of dimensionality.” There is therefore an advantage to reducing the number of parameters.
  • the large number of parameters is reduced to three - "stimulation rate," "Q-value," and "map number." The stimulation rate and Q-value can dramatically change a person's hearing ability.
  • the map number is an integer that labels the map and includes virtually all device parameters along with a frequency allocation table. Changing any parameter value or frequency allocation to the different channels would constitute a new map with a new map number.
  • the error surface is reduced to a four-dimensional space, thereby considerably reducing the minimum amount of data required to model the surface.
  • Each set of three parameter values constitutes a point. Only points at which a person has been tested, called sampled points, have a corresponding weighted error.
  • the error surface is constituted of sampled points. Adjusting parameters to reduce errors in one feature may lead to an increase in error in another feature. In order to adjust parameters such that the overall performance is enhanced, one should strive to reduce the total weighted error as described by equation (i).
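
A sketch of the reduced error surface as a table of sampled points over the three remaining parameters; the specific parameter values and errors below are hypothetical. With only sampled points available, the best current estimate of a minimum is simply the sampled combination with the smallest total weighted error:

```python
# Each sampled point: (stimulation_rate_Hz, q_value, map_number) -> total weighted error
sampled_points = {
    (900.0,  10.0, 1): 0.41,
    (1200.0, 15.0, 1): 0.27,
    (1200.0, 20.0, 2): 0.33,
    (1800.0, 20.0, 2): 0.30,
}

def best_sampled_point(points):
    """Return the sampled parameter combination with the lowest weighted error.

    Procedural knowledge would go further and try to predict better, untested
    points from the shape of the surface; this only reads off the sampled minimum."""
    params, error = min(points.items(), key=lambda kv: kv[1])
    return dict(zip(("stimulation_rate", "q_value", "map_number"), params)), error

print(best_sampled_point(sampled_points))
```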
  • the software may be configured to run on any computer or workstation such as a PC or PC-compatible machine, an Apple Macintosh, a Sun workstation, etc.
  • any device can be used as long as it is able to perform all of the functions and capabilities described herein.
  • the particular type of computer or workstation is not central to the invention, nor is the configuration, location, or design of the database, which may be flat-file, relational, or object-oriented, and may include one or more physical and/or logical components.
  • the servers may include a network interface continuously connected to the network, and thus support numerous geographically dispersed users and applications.
  • the network interface and the other internal components of the servers intercommunicate over a main bi-directional bus.
  • the main sequence of instructions effectuating the functions of the invention and facilitating interaction among clients, servers and a network can reside on a mass-storage device (such as a hard disk or optical storage unit) as well as in a main system memory during operation. Execution of these instructions and effectuation of the functions of the invention is accomplished by a central-processing unit (“CPU").
  • a group of functional modules that control the operation of the CPU and effectuate the operations of the invention as described above can be located in system memory (on the server or on a separate machine, as desired).
  • An operating system directs the execution of low-level, basic system functions such as memory allocation, file management, and operation of mass storage devices.
  • a control block implemented as a series of stored instructions, responds to client-originated access requests by retrieving the user-specific profile and applying the one or more rules as described above.

Abstract

Systems and methods may be used to modify a controllable stimulus generated by a digital audio device in communication with a human user. An input signal is provided to the digital audio device. In turn, the digital audio device sends a stimulus based on that input signal to the human user, who takes an action, usually in the form of an output signal, to characterize the stimulus that the user receives, based on the user's perception. An algorithm, lookup table, or other procedure then determines a difference between the input signal and the output signal, and a perceptual model is constructed based at least in part on the difference. Thereafter, a new value for the parameter of the digital audio device is suggested based at least in part on the perceptual model. This process continues iteratively until the user's optimal device parameters are determined.

Description

AUTOMATIC PERFORMANCE OPTIMIZATION FOR PERCEPTUAL DEVICES
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application is related to U.S. Patent Application Serial No. 12/185,394, filed August 4, 2008, entitled "Automatic Performance Optimization for Perceptual Devices," and to U.S. Provisional Patent Application No. 61/164,453, filed March 29, 2009, the disclosures of which are hereby incorporated by reference herein in their entireties.
FIELD OF THE INVENTION [0002] This invention relates to systems and methods for optimizing performance of perceptual devices to adjust to a user's needs and, more particularly, to systems and methods for adjusting the parameters of digital hearing devices to customize the output from the hearing device to a user.
BACKGROUND OF THE INVENTION
[0003] Perception is integral to intelligence. Perceptual ability is a prerequisite for any intelligent agent, living or artificial, to function satisfactorily in the real world. For an agent to experience an external environment with its perceptual organs (or sensors, in the case of artificial agents), it sometimes becomes necessary to augment the perceptual organs, the environment, or both.
[0004] For example, human eyes are often augmented with a pair of prescription glasses. In another example, to experience surround-sound in a car or in a home theater, the environment is augmented with devices, such as speakers and sub-woofers, placed in certain positions with respect to the agent. To experience a 3D movie, the agent often has to wear specially designed eyeglasses, such as polarized glasses. These and other devices including, without limitation, audio headphones, hearing aids, cochlear implants, low-light or "night-vision" goggles, tactile feedback devices, etc., may be referred to generally as "perceptual devices." [0005] Due to personal preference, taste, and the raw perceptual ability of the organs, the quality of experience achieved by augmenting the agent's perceptual organs or environment with devices is often user-specific. As a result, the devices should be tuned to provide the optimum experience to each user. In the case of hearing devices (e.g., hearing aids, cochlear implants), such devices are endowed with parameters that tailor the device's performance to an individual's hearing needs. Different devices from different manufacturers contain unique sets of parameters, and tuning the set of parameters for an individual user has traditionally been performed by hand. The methods employed to hand-tune device parameters achieve sub-optimal levels of performance (namely, speech intelligibility) because of several factors: design of the battery of acoustic tests, subjectivity of the human tuner, and limited tuning time due to patient fatigue.
[0006] With the advent of sophisticated perceptual devices, each having a large number of degrees of freedom, it has become difficult to tune such devices to the satisfaction of each user. Many devices are left to the user for ad-hoc self-tuning, while many others are never tuned because the time and cost required to tune a device for a user may be too high. For example, cochlear implant devices, often used by people having severe hearing-impairment, are virtually never tuned by an audiologist to a particular user, but instead are left with the factory default settings to which the user's brain must attempt to adjust. Thus, a hearing-impaired person may never get the full benefit of his cochlear implant. [0007] Agents with simple perceptual systems (e.g., robotic vacuum cleaners) have sufficient transparency to allow for the tracking of their raw perceptual abilities, while agents with complex perceptual systems (e.g., humans) lack that transparency. Hence, it is extremely difficult to tune devices to the satisfaction of members of the latter class of users, because of the complexity of the devices that enhance an already complex perceptual system.
[0008] A sophisticated perceptual device should also allow the user to tune the device to meet that user's particular perceptual needs. Such complex devices often have a large set of parameters that can be tuned to a specific user's needs. Each parameter can be assigned one of many values, and determining the values of parameters for a particular user's optimum performance is difficult. A user is required to be thoroughly tested with the device in order to be assigned the optimum parameter values. The number of tests required increases exponentially with the number of device parameters. Dedicating a significant amount of time to testing often is not a feasible option; accordingly, it may be advantageous to reduce the complexity of the problem. [0009] Therefore, there is a need to automatically tune perceptual devices in a user-specific way, as well as in a way that decreases test time and increases accuracy to achieve optimal or near-optimal levels of performance. As of today, living agents, especially humans, have complex perceptual systems that can take advantage of a user-specific tuning method. Artificial agents with complex perceptual systems, when developed, will also benefit from the user-specific tuning method.
SUMMARY OF THE INVENTION
[0010] In one aspect, the invention relates to a method for modifying a controllable stimulus generated by a perceptual device in communication with a human user, the method including: generating an input signal to the perceptual device, the perceptual device sending a stimulus to the human user, the stimulus defined at least in part by a parameter, the parameter having a value; receiving an output signal from the human user, the output signal based at least in part on a perception of the stimulus by the human user; determining a difference between the input signal and the output signal; constructing a perceptual model based at least in part on the difference; and suggesting a value for the parameter based at least in part on the perceptual model. In one embodiment, suggesting a value further includes utilizing a knowledge base. In another embodiment, the knowledge base includes at least one of declarative knowledge and procedural knowledge. In yet another embodiment, the method further includes generating a second input signal to the perceptual device based at least in part on the perceptual model. In other embodiments, the input signal is an audio signal, and/or the perceptual device is a digital audio device.
[0011] In another aspect, the invention relates to a system for modifying a controllable stimulus generated by a perceptual device in communication with a human user, the system including: a test set generator for generating a test set to the perceptual device, the perceptual device sending a stimulus to the human user, the stimulus defined at least in part by a parameter, the parameter including a value; a signal receiver for receiving an output signal from the human user, the output signal based at least in part on a perception of the stimulus by the human user; a perceptual model module for constructing a perceptual model based at least in part on the difference; and a parameter generator for suggesting a value for the parameter based at least in part on the perceptual model. In an embodiment of the above aspect, the system further includes a second signal generator for generating a second input signal to the perceptual device based at least in part on the perceptual model. In another embodiment, the system further includes a storage module for storing information used in the construction of the perceptual model. In yet another embodiment, the information stored in the storage module includes a knowledge base. In still another embodiment, the system includes a rule extraction module for formulating a rule based at least in part on the perceptual model. In another embodiment of the above aspect, the parameter generator suggests a value for the parameter based at least in part on at least one of information obtained from the storage module and information obtained from the perceptual model module. In another embodiment, the signal generator includes the second signal generator. In yet another embodiment the input signal is an audio signal.
[0012] In another aspect, the invention relates to an article of manufacture having computer-readable portions embodied thereon for modifying a controllable stimulus generated by a perceptual device in communication with a user, the article including: computer readable instructions for providing an input signal to the perceptual device, the perceptual device sending a stimulus to the human user, the stimulus defined at least in part by a parameter, the parameter having a value; computer readable instructions for receiving an output signal from the agent, the output signal based at least in part on a perception of the stimulus by the human user; computer readable instructions for determining a difference between the input signal and the output signal; computer readable instructions for constructing a perceptual model based at least in part on the difference; and computer readable instructions for suggesting a value for the parameter based at least in part on the perceptual model. In an embodiment of the above aspect, the article of manufacture further includes computer readable instructions for providing a second input signal to the perceptual device based at least in part on the perceptual model. In another embodiment, the input signal is an audio signal.
[0013] In another aspect, the invention relates to a method of tuning a perceptual device from a speech waveform, the method including the steps of: inputting a speech waveform from a user response to a stimulus; extracting at least one first acoustic feature from the waveform; segmenting at least one phoneme from the at least one first acoustic feature; extracting at least one second acoustic feature from the at least one phoneme; comparing the speech waveform to a stimulus; and determining at least one parameter value for the perceptual device. Embodiments of the above aspect include the steps of: transmitting a stimulus to a user; and receiving a user response based at least in part on the stimulus. In other embodiments, the at least one first acoustic feature is extracted utilizing a frame-based procedure. In other embodiments, the at least one second acoustic feature is extracted utilizing a segment-based procedure. In yet other embodiments, the method includes the step of determining an error that is a difference between the speech waveform and the stimulus. In still other embodiments, the error is equal to
$\varepsilon = \sum_i w_i \, |f_{s,i} - f_{r,i}|$,
where $w_i$ is the weight of the $i$-th feature, $f_{s,i}$ and $f_{r,i}$ are the $i$-th features of the stimulus and response respectively, and $|\cdot|$ denotes a distance measure. In other embodiments, the distance measure is a Mahalanobis distance. [0014] In another aspect, the invention relates to an article of manufacture having computer-readable program portions embedded thereon for tuning a perceptual device from a speech waveform, the program portions including: instructions for inputting a speech waveform from a user response to a stimulus; instructions for extracting at least one first acoustic feature from the waveform; instructions for segmenting at least one phoneme from the at least one first acoustic feature; instructions for extracting at least one second acoustic feature from the at least one phoneme; instructions for comparing the speech waveform to a stimulus; and instructions for determining at least one parameter value for the perceptual device. [0015] In another aspect, the invention relates to a system for tuning a perceptual device from a speech waveform, the system including: a receiver for receiving a speech waveform from a user response to a stimulus; a first extractor for extracting at least one first acoustic feature from the waveform; a first processor for segmenting at least one phoneme from the at least one first acoustic feature; a second extractor for extracting at least one second acoustic feature from the at least one phoneme; a second processor for comparing the speech waveform to a stimulus; and a third processor for determining at least one parameter value for the perceptual device. Embodiments of the above aspect include a transmitter for transmitting a stimulus to a user. Other embodiments include a fourth processor for determining an error that is a difference between the speech waveform and the stimulus. In still other embodiments, the system includes a system processor that includes the first extractor, the first processor, the second extractor, the second processor, the third processor, and the fourth processor. [0016] In another aspect, the invention comprises an article of manufacture having a computer-readable medium with computer-readable instructions embodied thereon for performing the methods described in the preceding paragraphs. In particular, the functionality of a method of the present invention may be embedded on a computer-readable medium, such as, but not limited to, a floppy disk, a hard disk, an optical disk, a magnetic tape, a PROM, an EPROM, CD-ROM, DVD-ROM or downloaded from a server. The functionality of the techniques may be embedded on the computer-readable medium in any number of computer-readable instructions, or languages such as, for example, FORTRAN, PASCAL, C, C++, Java, PERL, LISP, JavaScript, C#, Tcl, BASIC and assembly language. Further, the computer-readable instructions may, for example, be written in a script, macro, or functionally embedded in commercially available software (such as EXCEL or VISUAL BASIC).
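
A brief sketch of this error computation, under the assumptions that each feature is a vector and that a covariance matrix for each feature is already known from training data, so the per-feature distance can be the Mahalanobis distance named above; all data here is synthetic and the weights are illustrative:

```python
import numpy as np

def mahalanobis(u, v, cov):
    """Mahalanobis distance between vectors u and v under covariance cov."""
    diff = u - v
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

def weighted_error(stimulus_feats, response_feats, weights, covariances):
    """epsilon = sum_i w_i * |f_s,i - f_r,i|, with |.| a per-feature Mahalanobis distance."""
    return sum(w * mahalanobis(fs, fr, cov)
               for w, fs, fr, cov in zip(weights, stimulus_feats, response_feats, covariances))

rng = np.random.default_rng(2)
dim = 4                                              # illustrative feature dimensionality
stimulus = [rng.normal(size=dim) for _ in range(3)]
response = [f + rng.normal(scale=0.1, size=dim) for f in stimulus]
covs = [np.eye(dim) for _ in range(3)]               # identity covariance -> Euclidean distance
print(weighted_error(stimulus, response, weights=[0.5, 0.3, 0.2], covariances=covs))
```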
BRIEF DESCRIPTION OF THE DRAWINGS [0017] Other features and advantages of the present invention, as well as the invention itself, can be more fully understood from the following description of the various embodiments, when read together with the accompanying drawings, in which:
• FIG. 1 is a schematic diagram of a method for automatic hearing device parameter tuning from a speech waveform; • FIG. 2 is a schematic diagram depicting the relationship between a perceptual device and an agent in accordance with one embodiment of the present invention;
• FIG. 3 is a schematic diagram of an apparatus in accordance with one embodiment of the present invention; • FIG. 4 is the schematic diagram of FIG. 3 incorporating a knowledge base in accordance with one embodiment of the present invention;
• FIG. 5 is a flowchart of a testing procedure in accordance with one embodiment of the present invention; and
• FIG. 6 is a schematic diagram of a testing system in accordance with one embodiment of the present invention. DETAILED DESCRIPTION OF THE INVENTION
[0018] FIG. 1 depicts one embodiment of a method 10 for automatic tuning of hearing device parameters directly from acoustic features of speech. The speech waveform is spoken by a hearing device user (i.e., a patient or agent) 12 after being presented with an oral speech stimulus in a stimulus/response test paradigm, such as that depicted in U.S. Patent No.
7,206,416, entitled "Speech-based Optimization of Digital Hearing Devices," the disclosure of which is hereby incorporated by reference herein in its entirety. The speech waveform may include, generally, one or more multi-phoneme sounds spoken by a user. These multi-phoneme sounds may form parts, or entire portions of, words, phrases, sentences, or other constructs of spoken language, and may be in any language or in a plurality of languages.
[0019] The speech waveform is input into an acoustic feature extraction process 14. The acoustic features are input into a segmentation routine 16 which delimits phoneme boundaries in the speech waveform. Segmentation may be performed using a hidden Markov model (HMM), described in Rabiner, L., "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition," Proc. IEEE, vol. 77, no. 2, pp. 257-286, Feb. 1989, the disclosure of which is hereby incorporated by reference herein in its entirety. In other embodiments, any automatic speech recognition (ASR) engine may be employed. The HMM may be trained as phoneme models, bi-phone models, N-phone models, syllable models or word models. A Viterbi path of the speech waveform through the HMM may be used for segmentation, so the phonemic representation of each state in the HMM is required. Phonemic representation of each state may utilize hand-labeling phoneme boundaries for the HMM training data. Specific states are assigned to specific phonemes (more than one state may be used to represent each phoneme for all types of HMMs).
[0020] Because segmentation is performed using an ASR engine, the frame-based acoustic feature extraction process may be a conventional ASR front end. Human factor cepstral coefficients (HFCCs), a spectral flatness measure, a voice bar measure, and delta and delta-delta coefficients may be utilized as acoustic features. HFCCs and the spectral flatness measure are described in Skowronski, M. D. and J. G. Harris, "Exploiting independent filter bandwidth of human factor cepstral coefficients in automatic speech recognition," J. Acoustical Society of America, vol. 116, no. 3, pp. 1774-1780, Sept. 2004, and Skowronski, M. D. and J. G. Harris, "Applied principles of clear and Lombard speech for intelligibility enhancement in noisy environments," Speech Communication, vol. 48, no. 5, pp. 549-558, May 2006, the disclosures of which are hereby incorporated by reference herein in their entireties. Acoustic features may be measured for each analysis frame at predetermined durations. Frame durations from about 1 ms to about 50 ms, from about 10 ms to about 40 ms, and from about 15 ms to about 30 ms are acceptable. In certain embodiments, durations of about 20 ms are desirable. Uniform overlap between adjacent frames is also desirable. The overlap duration may be in the range of about 0 ms to a predetermined overlap duration. The predetermined overlap duration may be quantified as the frame duration minus Δ, where Δ is a small positive value greater than zero. A smaller Δ yields more overlap between frames, and more frames per second. As Δ approaches zero, the number of frames per second grows without bound. Overlap durations of about 10 ms between adjacent frames may be desirable in certain embodiments. Analysis frames and overlaps having other durations and times are also contemplated.
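By way of illustration only, the following sketch shows one way the frame-based front end described above could be organized, assuming the roughly 20 ms frames with about 10 ms of overlap mentioned in the text; the function names, the choice of log energy and spectral flatness as stand-in features, and the NumPy-based implementation are illustrative assumptions and not part of the original disclosure.

```python
import numpy as np

def frame_signal(waveform, sample_rate, frame_ms=20.0, overlap_ms=10.0):
    """Split a waveform into uniformly overlapping analysis frames."""
    frame_len = int(sample_rate * frame_ms / 1000.0)
    hop = frame_len - int(sample_rate * overlap_ms / 1000.0)  # start-to-start step
    return np.array([waveform[start:start + frame_len]
                     for start in range(0, len(waveform) - frame_len + 1, hop)])

def frame_features(frames):
    """Stand-in frame-based features: log energy and spectral flatness per frame."""
    feats = []
    for frame in frames:
        spectrum = np.abs(np.fft.rfft(frame)) + 1e-12
        log_energy = np.log(np.sum(frame ** 2) + 1e-12)
        flatness = np.exp(np.mean(np.log(spectrum))) / np.mean(spectrum)
        feats.append([log_energy, flatness])
    return np.array(feats)
```

A smaller hop (i.e., larger overlap) simply produces more frames per second, which is the trade-off the Δ discussion above describes.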
[0021] Segment-based acoustic features for each phoneme of the speech waveform are measured from segmented regions 18. The features include HFCC calculated over a single window spanning the entire region of a phoneme (which may vary from about 5 ms to tens of seconds, depending on the agent's unconscious or purposeful exhalation length while forming a sound or sounds), a single voice bar measure, and/or a single spectral flatness measure, augmented with several other acoustic features. Various other acoustic features may be appended to the set of segment-based features listed above that provide additional information targeting specific distinctive features of phonemes as described in Jakobson, R., C. G. M. Fant, and M. Halle, "Preliminaries to Speech Analysis: The Distinctive Features and Their
Correlates," MIT Press, Cambridge, MA, 1961, the disclosure of which is hereby incorporated by reference herein in its entirety. These include, but are not limited to, main-lobe width of an autocorrelation function of the acoustic waveform in the segmented region, ratio of low- frequency to high-frequency energy, ratio of energy at the beginning and end of the segment, ratio of maximum to minimum spectral density (calculated variously by direct spectral measurement or from any spectral envelope estimate such as that from linear prediction), the spectral second moment, plosive burst duration, ratio of plosive burst energy to overall phoneme energy, and formant frequency and bandwidth estimates. [0022] Each stimulus presented to the patient will require a response. The difference between the two constitutes the error in perceiving that stimulus. The distribution of acoustic features for each stimulus phoneme is then calculated. This distribution may be represented by any distribution model, estimated from the same hand-labeled data used to train the segmentation HMM. For example, a Gaussian mixture model (GMM) may be utilized to represent the stimulus features. An error ξ is calculated as the mean of the weighted distance between the distribution of features for each stimulus and the features extracted from the corresponding response. That is,
\xi(s \rightarrow r) = \frac{\sum_i w_i \, |fs_i - fr_i|}{\sum_i w_i} \qquad \text{(i)}
where w_i is the weight of the i-th feature, fs_i and fr_i are the i-th features of the stimulus and the response, respectively, and | · | denotes a distance measure, such as a Mahalanobis distance. The sum of errors over all stimuli in a test constitutes the total weighted error.
[0023] The total weighted error for each set of device parameter values constitutes a point in the perceptual space of a patient, which is formed incrementally as more tests are performed. A set of algorithms studies this space utilizing any number of methods and suggests new parameter values for testing of the patient. The process continues until the optimal, or near-optimal, set of parameter values is obtained. One embodiment of an optimization process 20 is described in detail below. In brief, in certain embodiments of the optimization process 20, information corresponding to a test stimulus and/or a previous device parameter set 22 is utilized to optimize or tune a perceptual device. The optimization process 20, in turn, generates a new device parameter set 24 that is used to improve the performance of the perceptual device.
[0024] Various embodiments of the methods and systems disclosed herein are used to "tune" a perceptual device. In this application, the term "optimization" is sometimes used to describe the process of tuning, which typically includes modifying parameters of a perceptual device. However, one of ordinary skill in the art would understand that the disclosed methods and systems may be used to "modify" the parameters of a device without achieving "optimization." That is, there may be instances where limitations of a device, or of user perception, may prevent complete optimization of a parameter, where "optimization" could be characterized as obtaining perfect or near-perfect results.
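A minimal sketch of equation (i), assuming each feature is reduced to a scalar and that the Mahalanobis distance degenerates to a per-feature standardized distance using a standard deviation estimated from the stimulus feature distribution; the function and argument names are illustrative, not taken from the original.

```python
import numpy as np

def weighted_error(fs, fr, w, sigma):
    """Equation (i): weighted mean distance between stimulus features fs and response features fr.

    fs, fr, w, sigma are equal-length 1-D arrays; |fs_i - fr_i| / sigma_i is the
    one-dimensional Mahalanobis distance for feature i, with sigma_i estimated
    from the distribution of that feature over the stimulus model.
    """
    fs, fr, w, sigma = map(np.asarray, (fs, fr, w, sigma))
    distances = np.abs(fs - fr) / sigma
    return float(np.sum(w * distances) / np.sum(w))
```

The total weighted error for a test is then simply the sum of weighted_error over all stimulus/response pairs, matching the definition above.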
[0025] Another consideration is that the testing associated with the tuning process may stop short when the tester becomes tired or otherwise stops the test, without completely "optimizing" the device. True "optimization" may not be necessary or desirable, as even seemingly minor improvements or modifications to a device parameter may produce significant positive results for a device user. Accordingly, the terms "optimization," "modification," "tuning," "adjusting," and like terms are used herein interchangeably and without restriction to describe systems and methods that are used to modify parameters of a perceptual device, notwithstanding whether the output from the device is ultimately "optimized" or "perfected," as those terms are typically understood.
[0026] Certain embodiments of the disclosed methods and systems automatically tune at least one device parameter based on a user's raw perceptual ability in order to improve the user's perception, utilizing different tuning algorithms operating separately or in tandem so that the device can be tuned quickly. The device parameters can be user-specific or user-independent. In one embodiment of the optimization method, a model is created to describe a user's perception (i.e., the perceptual model). This model is incremental and is specific to a user and his device. Next, one or more algorithms are applied to the model, resulting in predictions (along with confidence and explanation) of the optimum parameter values for the user. Then, the user is iteratively tested with the values having the highest confidence, and the model is further updated. Last, a set of rules capturing user-independent information is used to tune certain parameters.
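The paragraph above can be summarized as a loop. The sketch below is a schematic rendering under the assumption of a device object with default_parameters() and apply() methods and predictor callables returning (parameters, confidence, explanation) triples; none of these names are defined in the original text.

```python
def tune_device(device, predictors, run_test, max_iterations=20):
    """Schematic tuning loop: test, update the perceptual model, pick the most
    confident prediction, retest, as outlined in the paragraph above."""
    perceptual_model = []                      # (parameter_values, weighted_error) samples
    params = device.default_parameters()
    for _ in range(max_iterations):
        error = run_test(device, params)       # present stimuli, score responses
        perceptual_model.append((params, error))
        candidates = [predict(perceptual_model) for predict in predictors]
        params, confidence, explanation = max(candidates, key=lambda c: c[1])
        device.apply(params)                   # test next with the most confident values
    return min(perceptual_model, key=lambda sample: sample[1])[0]
```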
[0027] The number of parameters governing the operation of a given perceptual device may be large. The amount of data required to faithfully model a user's perceptual strengths and weaknesses using that device increases exponentially with the number of device parameters; this limits the ability to reach optimal settings for the device in a reasonable time. In one embodiment, a number of algorithms are used with simple independent assumptions regarding the model. Using these assumptions, each algorithm studies the model and makes predictions with a confidence. The most confident prediction is chosen at any point in time. This architecture helps reduce the complexity of a solution that would otherwise be enormous. In other embodiments, lookup tables or other procedures may be utilized to perform the optimization, in much the same way as the algorithms described above.
[0028] In this context, a user may be considered a black box with perceptual organs that can accept a signal as input and produce a signal as output in accordance with certain instructions. This method is useful for applications where the black box is too complex to be modeled non-stochastically, such as the human brain. Depending on the nature of the "black box," the instructions can be conveyed by different means. For example, a human might be told instructions in a natural language; an artificial agent might be programmed with the instructions.
[0029] Raw perception of a user is judged by some criteria that measure the actual output signal against the output signal expected from the application of the given set of instructions to the input signal. For example, if the input signals are spoken phonemes, the black box is a human brain with ears as the perceptual organs, and the instruction is to reproduce the input phonemes (as speech or in writing), the perception might be measured by computing the difference between the input and output phonemes. In another example, if the input signal is a set of letters written on a piece of paper, the black box is a human brain with eyes as the perceptual organs, and the instruction is to reproduce the letters (as speech or in writing), the perception might be measured by computing the difference between the input and output letters. It is assumed that the instructions have been correctly conveyed and are being followed by the black box.
[0030] FIG. 2 depicts an exemplary relationship between the perceptual device D and the agent A. Given a user or an agent A, one or more devices D, an input signal S_inp, and a corresponding output signal S_out that the agent has produced obeying certain instructions, FIG. 2 depicts the relationships:

D(S_inp) = S_int
A(S_int) = S_out
A(D(S_inp)) = S_out

where S_int is the intermediate signal or stimulus emanated from the device(s) and perceived by the agent. In the case of a digital audio device, the stimulus is the sound actually heard by the user. The intermediate signal cannot be measured in the same way that S_inp and S_out are susceptible of measurement. It is desired that S_inp = S_out, hence A(D(·)) = I(·), where I(·) is the identity function.
[0031] In a typical application of the current invention, almost nothing is known about the function A. The function D is characterized by the device parameters. Embodiments of the present invention (1) statistically model the perceptual errors (i.e., some metric applied to S_inp − S_out) for an agent with respect to the device parameters, and (2) study this perceptual model to predict the best set of parameter values. Ideally, the predicted parameter values render S_inp = S_out for any S_inp for the agent and the device. Thus, in general, the present invention proposes a general method for estimating the function A(D(·)) where minimal knowledge is available regarding the function A.
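Purely as a clarifying restatement (this formalization does not appear in the original), and assuming a scalar distance d(·,·) between signals such as the weighted error of equation (i), the goal of approximating the identity A(D(·)) = I(·) can be written as a minimization over the device parameter vector θ:

\theta^{*} = \arg\min_{\theta}\; \mathbb{E}_{S_{inp}}\!\left[\, d\!\left(S_{inp},\, A\!\left(D_{\theta}(S_{inp})\right)\right) \right]

Here D_θ denotes the device configured with parameter values θ, and the expectation (in practice, an average over the stimuli of a test) plays the role of the sum of errors over all stimuli.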
[0032] In one embodiment of the present invention, a method is provided for automatically tuning the parameters of at least one perceptual device in a user-specific way. The agent or its environment is fitted with a device(s) whose parameters are preset, for example, to factory default values. The proposed method may be implemented as a computer program that tests the raw perception of the agent. FIG. 3 depicts one such implementation of the program 100. Based on the results of the test, the program 100 may suggest new parameter values along with an explanation of why such values are chosen and the confidence of the suggested set of values 102. The devices 104 are reset with the parameter values with the highest confidence or best explanation. If a human tester (for example, an audiologist fine-tuning a digital hearing aid or cochlear implant (CI)) is conducting the test using the computer program 100, he might decide to disregard the suggested set of values and set his own values if he finds the suggested parameter values and the explanation not particularly useful. Such a decision on the part of the tester is based generally on the tester's expert domain knowledge. In such a situation, the knowledge base 106 of the program 100 is updated with the knowledge of the expert used in determining an alternative set of values. At each iteration of the program 100, the agent 108 is tested with a new set of parameter values and, after testing, the program 100 suggests a new set of parameters. This procedure continues until a certain set of parameter values is obtained that helps the agent 108 perceive satisfactorily. Particularly advanced programs, utilizing a number of algorithms, may be able to suggest the optimum set of parameter values within a very short period of testing. Other programs may utilize lookup tables or other procedures to suggest the optimum set of parameters.
[0033] The purpose of testing is to determine the raw perceptual ability, independent of context and background knowledge, of the agent 108. A series of input signals is presented to the agent 108 whose environment is fitted with at least one perceptual device 104 set to certain parameter values. In certain embodiments, the input signal may be of sufficient length, duration, complexity, etc. to exceed a single phoneme sound. Such a complex signal will ultimately elicit a responsive speech waveform from the agent. After each signal is presented, the agent 108 is given enough time to output a signal in response to its perceived signal, in accordance with instructions that the agent 108 has previously received. The output signal 110 corresponding to each input signal is recorded along with the time required for response.
[0034] In the case of an output signal in the form of a complex speech waveform, the speech waveform then may be processed as described above with regard to FIG. 1. For example, an ASR 150 may be used to process the speech waveform. First, the speech waveform is input into an acoustic feature extractor 152. The acoustic features are input into a processor for segmentation 154 which delimits phoneme boundaries in the speech waveform. A second extractor 156 then measures segment-based acoustic features for each phoneme of the speech waveform. The resulting features are then used in the following processes to compare the stimulus to the speech waveform, to optimize the perceptual device.
A metric captures the difference between the input signal and the agent's response in a meaningful way such that a model 112 of the agent's perceptual ability can be incrementally constructed using that metric and the device parameters.
[0035] At the end of each iteration, the test set creator or generator 114, utilizing one or more algorithms, lookup tables, or other procedures, modifies the parameters based on information received during the test. The next set of input signals on which the agent 108 should be tested is chosen based on its strengths and weaknesses as evident from the model 112. A new test starts with the perceptual devices 104 set to new parameter values, again based on the application of the algorithm to the information. An increase in response time indicates that either the agent 108 is having difficulty in perception or the agent 108 is getting fatigued. In the latter case, the agent 108, tester, or program 100 may opt to rest before further testing.
[0036] The model 112 describes the perceptual ability of the agent 108 with respect to the perceptual devices 104. Given an accurate model, one can predict the parameter values best suited for an agent 108. However, the model 112 is never complete until the agent 108 has been tested with all combinations of values for the parameters. Such testing is not feasible in a reasonable time for any complicated device. The model 112 is incremental and thus each prediction is based on the incomplete model derived prior to that iteration.
[0037] FIG. 4 presents another embodiment of the present invention incorporating a knowledge base into the computer program 100 of FIG. 3. The knowledge base (KB) of the computer program 100 stores knowledge in two forms: declarative 120 and procedural 122. Declarative knowledge 120 is stored as a set of statements useful for predicting a new set of parameter values 132 based on the model of the agent's perceptual ability. An example of declarative knowledge would include a situation where the agent 108 is a human with hearing loss, the device 104 is a CI, and his model 112 shows that he is weak in hearing the middle range of the frequency spectrum. In this case, the declarative knowledge 120 would include a statement that more CI channels should be associated with frequencies in that middle range than the higher or lower frequency ranges. Declarative knowledge can be readily applied, wherever appropriate, to make an inference. Often a user's previously tested parameters and device parameters 134 may be utilized with the declarative knowledge.
[0038] Procedural knowledge 122 is stored as procedures or algorithms that study the perceptual model 112 in order to make predictions for new parameter values. Each item of procedural knowledge is an independent algorithm 124 that studies the model 112 in a way which might involve certain assumptions about the model 112. These items of procedural knowledge may also utilize declarative knowledge 120 to study the model 112. Upon studying a model 112 and comparing it with the stored models of previously tested similar agents using similar devices, the algorithms may derive new rules 126 for storage as items of declarative knowledge 128. An example of procedural knowledge would include a situation where the agent is a human with hearing loss and the device is a CI. In this case, his model might be studied by an algorithm assuming that there exists a region in the model that represents the perceptual error minima of the agent.
Hence, the algorithm will study the model hoping to find that minimum region and will predict appropriate parameter values for that minimum.
[0039] For any complicated perceptual device, the number of adjustable parameters can be large. The number of tests required to tune these parameters may even increase exponentially with the number of device parameters. One of the challenges faced by the proposed method is to reduce the number of tests so that the time required for tuning the parameter values can be reduced to a practical time period. One way to make the process more efficient is to utilize procedural knowledge 122. In the depicted embodiment, a number of procedures, lookup tables, or algorithms 124 with very different assumptions are contemporaneously applied to the model 112. After application, each procedure provides its prediction of the parameters along with a confidence value for the prediction and an explanation of how the prediction was reached. These explanations are evaluated, either by a supervisory program or a tester, and the prediction that provides the best explanation is selected 130. By diversifying the assumptions used in studying the model 112, the chance of the method making inferior predictions may be significantly reduced. Since the different procedures essentially "compete" against each other, the resulting prediction is often better than the prediction reached by any single procedure operating alone. New items of procedural knowledge can be added to the system at will. Note that an ASR, as described with regard to FIG. 3, may also be utilized in this system 100 depicted in FIG. 4, to extract features from speech waveforms for further processing.
[0040] FIG. 5 depicts an exemplary testing procedure 200 in accordance with one embodiment of the present invention. In this example, a user fitted with a CI is tested in the presence of an audiologist, who is monitoring the test. The program begins by generating an input signal 202. This input signal directs the CI to deliver a stimulus (e.g., a phoneme sound) to the user. Prior to sending the stimulus, however, the stimulus parameter value is accessed 204 by the program. This value may be either a factory default setting (usually when the device is first implanted), a previously stored suggested value, or a previously stored override value. The latter two values are described in more detail below.
[0041] A stimulus based on the parameter is then delivered to the user 206. The program waits for an output signal from the user 208. This received output signal may take any form that is usable by the program. For example, the user may repeat the sound into a microphone, spell the sound on a keyboard, or press a button or select an icon that corresponds to their perception of the sound. Again, for more complex output signals, such as speech waveforms, features of such waveforms may be extracted and phonemes segmented prior to application of an optimization algorithm. The program notes the time T when the output signal is received.
[0042] Upon receipt of the output signal from the user, the elapsed time is compared to a predetermined value 210. If the time exceeds this value, the program determines that the user is fatigued 212, and the program ends 214. If the elapsed time does not exceed the threshold, however, the output signal and stimulus are compared 216 to begin analysis of the results.
The difference between the output signal from the user and the stimulus sent from the CI to the user is used to construct the perceptual model 218. Next, the program suggests a value for the next parameter to be tested 220.
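The FIG. 5 loop described in the preceding paragraphs might be sketched as follows; the 10-second fatigue threshold, the helper names, and the callable arguments are all assumptions for illustration, since the text does not fix them.

```python
import time

FATIGUE_THRESHOLD_S = 10.0  # illustrative; the text only requires "a predetermined value"

def run_test_iteration(device, parameter_value, present_stimulus, wait_for_response,
                       compare, perceptual_model, suggest_next_value):
    """One pass through the FIG. 5 style loop (steps 204-220)."""
    device.apply(parameter_value)              # step 204: stored or factory-default value
    stimulus = present_stimulus(device)        # step 206: deliver the stimulus to the user
    started = time.monotonic()
    response = wait_for_response()             # step 208: repeated sound, typed text, etc.
    elapsed = time.monotonic() - started
    if elapsed > FATIGUE_THRESHOLD_S:          # steps 210/212: user may be fatigued
        return None                            # step 214: end the test
    error = compare(stimulus, response)        # steps 216/218: difference updates the model
    perceptual_model.append((parameter_value, error))
    return suggest_next_value(perceptual_model)  # step 220: value for the next iteration
```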
[0043] At this point, the audiologist may optionally decide whether or not to utilize the suggested value 222 for the next test procedure, based on his or her knowledge base or other factors that may not be considered by the program. If the audiologist overrides the suggested value with a different value, this override value is stored 224 to be used for the next test. The program then determines if the test is complete 226, and may terminate the test 228 if required or desired by the user.
[0044] The test may be determined to be complete for a number of reasons. For example, the user or audiologist may be given the option at this point (or at any point during the test) to terminate testing. The program may determine that during one or more iterations of the test, the user's response time, as measured in step 210, increased such that fatigue may be a factor, warranting termination of the testing. Additionally, the program may determine that, based on information regarding the tested device or the program itself, all iterations or options have been tested. In such a case, the program may determine that no further parameter adjustment would materially improve the operation of the device or the program. Also, the program may interpret inconsistent information at this point as indicative of an error condition that requires termination. Other procedures for terminating testing are known in the art.
[0045] Returning to step 222, if the suggested value is accepted, this value is then stored for later use in a subsequent test 230. In an alternative embodiment of the program, the program may be operated without the assistance of an audiologist. In this case, acceptance of the suggested value would be the default response. In this way, the test may be utilized without the involvement of an audiologist. Thus, the program, with few modifications, could allow the user to self-tune his device remotely, potentially over an internet connection or with a stand-alone tuning device. After the suggested value is stored, a determination whether to continue the test 232 (subject to considerations similar to those described in step 226) may be made prior to ending the test 234.
[0046] The optimization methods of the current invention may be utilized with virtually any metric that may be used to test people that utilize digital hearing devices. One such metric is disclosed in, for example, U.S. Patent No. 7,206,416 to Krause et al., the disclosure of which is hereby incorporated by reference herein in its entirety, and will be discussed herein as one exemplary application of the optimization methods.
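The accept-or-override and stop conditions of steps 222-234 reduce to a few lines; the sketch below uses hypothetical names and treats the stored value and the completion test abstractly, since the text leaves both open.

```python
def accept_or_override(suggested_value, override_value=None, store=None):
    """Steps 222-230: use the audiologist's override if given, otherwise accept the
    suggested value (the default in unattended self-tuning); store whichever is chosen."""
    chosen = override_value if override_value is not None else suggested_value
    if store is not None:
        store(chosen)          # persisted as the starting value of the next test
    return chosen

def test_is_complete(iterations_run, max_iterations, fatigued, stop_requested,
                     no_further_improvement=False):
    """Steps 226/232: any of several independent conditions can end the test."""
    return (fatigued or stop_requested or no_further_improvement
            or iterations_run >= max_iterations)
```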
[0047] A typical testing system 300 is depicted in FIG. 6. The testing procedure tests the raw hearing ability, independent of context and background knowledge, of a hearing-impaired person. As the procedure begins, an input signal 302 is generated and sent to a digital audio device, which, in this example, is a CI 304. Based on the input signal, the CI will deliver an intermediate signal or stimulus 306, associated with one or more parameters, to a user 308. At the beginning of a test procedure, the parameters may be factory-default settings. At later points during a test, the parameters may be otherwise defined, as described below. In either case, the test procedure utilizes the stored parameter values to define the stimulus (i.e., the sound).
[0048] After a signal is presented, the user is given enough time to make a sound signal (or speak a string of sounds sufficient to form a speech waveform) representing what he heard. The output signal corresponding to each input signal is recorded along with the response time. If the response time exceeds a predetermined setting, the system determines that the person may be getting fatigued and will stop the test. The output signal 310 may be a sound repeated by the user 308 into a microphone 312. The resulting analog signal 314 is converted by an analog/digital converter 316 into a digital signal 318 delivered to the processor 320. Alternatively, the user 308 may type a textual representation of the sound heard into a keyboard 322. In the processor 320, the output signal 310 is stored and compared to the immediately preceding stimulus.
[0049] Based on the user response, an algorithm, lookup table, or other procedure decides the user's strengths and weaknesses and stores this information in an internal perceptual model. Additionally, the algorithm suggests a value for the next test parameter, effectively choosing the next input sound signal to be presented. This new value is delivered via the output module 324. If an audiologist is administering the test, the audiologist may choose to ignore the suggested value, in favor of their own suggested value. In such a case, the tester's value would be entered into the override module 326. Whether the suggested value or the tester's override value is utilized, this value is stored in a memory for later use (likely in the next test). These tests may be repeated with different sounds, words, sentences, or other stimuli until the CI performance is optimized or otherwise modified, the user fatigues, etc. In one embodiment, the test terminates when the user's strengths and weaknesses with respect to the current CI device parameters are comprehensively determined. A new test starts with the CI device set to new parameter values.
[0050] The disclosed system utilizes any number of algorithms that may operate substantially or completely in parallel to suggest parameter values in real time.
Exemplary algorithms include (1) computing a reduced set of phonemes (input sound signals) for testing a person based on his strengths and weaknesses from past tests and using the features of the phonemes, thereby reducing testing time considerably; (2) computing a measure of performance for a person from his tests involving features of phonemes and their weights; (3) classifying a person based on their strengths and weaknesses as obtained from previous tests; and (4) predicting the parameter setting of a CI device to achieve optimum hearing for a person using his perceptual model and similar people's optimal device settings. In addition to these algorithms, other embodiments utilize alternative methodologies or procedures to compute parameter values. For example, predetermined parameter values may be selected from a lookup table containing parameter value combinations keyed to a person's known or predicted strengths and weaknesses, as determined from prior test results.
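For the lookup-table variant mentioned at the end of the paragraph above, a sketch might look like the following; the table keys, the parameter names (borrowed from the three-parameter reduction discussed later), and every numeric value are invented for illustration and are not taken from the disclosure.

```python
# Hypothetical table keyed by a coarse weakness profile; real entries would come
# from clinical data, which the text does not provide.
PARAMETER_LOOKUP = {
    ("weak_mid_frequencies",): {"stimulation_rate": 1200, "q_value": 20, "map_number": 3},
    ("weak_high_frequencies",): {"stimulation_rate": 900, "q_value": 15, "map_number": 7},
}

def suggest_from_lookup(weakness_profile, default_parameters):
    """Select predetermined parameter values based on a person's known or
    predicted strengths and weaknesses, falling back to the current defaults."""
    return PARAMETER_LOOKUP.get(tuple(sorted(weakness_profile)), default_parameters)
```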
[0051] In human language, a phoneme is the smallest unit of distinguishable speech. Phonemes may be utilized in testing. For example, the input signal may be chosen from a set of phonemes from the Iowa Medial Consonant Recognition Test. Both consonant phonemes and vowel phonemes may be used during testing, though vowel phonemes may have certain disadvantages in testing: they are too easy to perceive and typically do not reveal much about the nature of hearing loss. It is known that each phoneme is characterized by the presence, absence or irrelevance of a set of nine features: Vocalic, Consonantal, Compact, Grave, Flat, Nasal, Tense, Continuant, and Strident. These features are arranged hierarchically such that errors in recognizing a feature "higher" up in the hierarchy would result in more speech recognition problems because they would affect a greater number of phonemes.
[0052] A person's performance in a test can be measured by the number of input sound signals (i.e., phonemes, although actual words, phrases, sentences, or other language constructs in any language may also be used) he fails to perceive. This type of basic testing, however, may fail to capture the person's strengths and weaknesses because many phonemes share similar features. For example, the phonemes '\f' and '\p' differ in only one of the nine features, Continuant. A person who fails to perceive '\p' due to an error in any feature other than Continuant will also fail to perceive '\f', and vice versa. Thus, counting the number of phoneme errors yields less accurate results because feature errors are giving rise to phoneme errors. For the same reason, in order to reduce the phoneme errors, it may be desirable to focus testing on the feature errors.
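As a concrete (and deliberately partial) illustration of the feature-error idea, the sketch below encodes two phonemes over the nine named distinctive features and counts per-feature disagreements; the individual feature values are illustrative guesses except for the Continuant contrast between '\p' and '\f', which the text itself states.

```python
FEATURES = ["Vocalic", "Consonantal", "Compact", "Grave", "Flat",
            "Nasal", "Tense", "Continuant", "Strident"]

# Illustrative presence(1)/absence(0) values; only the Continuant difference
# between "p" and "f" is asserted by the text above.
PHONEME_FEATURES = {
    "p": [0, 1, 0, 1, 0, 0, 1, 0, 0],
    "f": [0, 1, 0, 1, 0, 0, 1, 1, 0],
}

def feature_errors(stimulus_phoneme, response_phoneme):
    """Per-feature error counts n_i: 1 where the response disagrees with the stimulus."""
    s = PHONEME_FEATURES[stimulus_phoneme]
    r = PHONEME_FEATURES[response_phoneme]
    return [int(a != b) for a, b in zip(s, r)]
```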
[0053] In the present invention, a person's performance in a test is measured by the weighted mean of the feature errors, given by:

\delta = \frac{\sum_{i=1}^{9} w_i n_i}{\sum_{i=1}^{9} w_i} \qquad \text{(ii)}

where w_i is the weight and n_i is the number of errors in the i-th feature of the hierarchy. The weights of the features are experimentally ascertained to be {0.151785714, 0.151785714, 0.142857143, 0.098214286, 0, 0.142857143, 0.125, 0.125, 0.0625}. Other weights may be utilized as the testing procedures evolve for a given user or group of users. The actual weights used in optimization may take other values and may depend upon the testing procedure, the language being used, and other variables. Acceptable results may be obtained utilizing other weightings.
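A direct transcription of equation (ii) with the weights listed above; the function name and the representation of the error counts as a nine-element list are the only assumptions.

```python
FEATURE_WEIGHTS = [0.151785714, 0.151785714, 0.142857143, 0.098214286, 0,
                   0.142857143, 0.125, 0.125, 0.0625]

def weighted_feature_error(error_counts, weights=FEATURE_WEIGHTS):
    """Equation (ii): weighted mean of the per-feature error counts n_i."""
    assert len(error_counts) == len(weights) == 9
    return sum(w * n for w, n in zip(weights, error_counts)) / sum(weights)
```

With the feature_errors sketch above, weighted_feature_error(feature_errors("p", "f")) scores a single '\p'-for-'\f' confusion at roughly 0.125, since the listed weights sum to approximately 1.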
[0054] This manner of testing provides a weighted error representing the user's performance with a set of parameter values. If a person is tested with all possible combinations of parameter values, the result can be represented as a weighted error surface in a high-dimensional space, where the dimension is one more than the number of parameters being considered. In this error surface, there exists a global minimum and one or more local minima. In general, while the person's performance is good at each of these local minima, his performance is best at the global minimum. One task of the computer program is to predict the location of the global minimum or at least a good local minimum within a short period of testing.
[0055] The perceptual model may be represented in a number of ways, such as using a surface model, a set of rules, a set of mathematical/logical equations and inequalities, and so on, to obtain results. In the case of the surface model, due to the presence of many parameters, a very high-dimensional error surface may be formed. The minimum amount of data required to model such a surface increases exponentially with the number of dimensions, leading to the so-called "curse of dimensionality." There is therefore an advantage to reducing the number of parameters. In one embodiment, the large number of parameters is reduced to three: "stimulation rate," "Q-value," and "map number." The stimulation rate and Q-value can dramatically change a person's hearing ability. The map number is an integer that labels the map and includes virtually all device parameters along with a frequency allocation table. Changing any parameter value or frequency allocation to the different channels would constitute a new map with a new map number. Thus, the error surface is reduced to a four-dimensional space, thereby considerably reducing the minimum amount of data required to model the surface. Each set of three parameter values constitutes a point. Only points at which a person has been tested, called sampled points, have a corresponding weighted error. The error surface is made up of these sampled points. Adjusting parameters to reduce errors in one feature may lead to an increase in error in another feature. In order to adjust parameters such that the overall performance is enhanced, one should strive to reduce the total weighted error as described by equation (i).
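Restricting attention to the sampled points of the reduced (stimulation rate, Q-value, map number) space, the best point found so far is simply the minimum-error sample; the sketch below is that one-liner, with the data layout assumed rather than specified.

```python
def best_sampled_point(samples):
    """samples: list of ((stimulation_rate, q_value, map_number), weighted_error)
    pairs, one per tested combination. Returns the sampled point with the lowest
    weighted error on the incrementally built error surface."""
    return min(samples, key=lambda point: point[1])
```

This only ranks points that have already been tested; the algorithms described above go further by predicting promising unsampled points from the shape of the surface.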
[0056] In the embodiments described above, the software may be configured to run on any computer or workstation such as a PC or PC-compatible machine, an Apple Macintosh, a Sun workstation, etc. In general, any device can be used as long as it is able to perform all of the functions and capabilities described herein. The particular type of computer or workstation is not central to the invention, nor is the configuration, location, or design of the database, which may be flat-file, relational, or object-oriented, and may include one or more physical and/or logical components.
[0057] The servers may include a network interface continuously connected to the network, and thus support numerous geographically dispersed users and applications. In a typical implementation, the network interface and the other internal components of the servers intercommunicate over a main bi-directional bus. The main sequence of instructions effectuating the functions of the invention and facilitating interaction among clients, servers and a network, can reside on a mass-storage device (such as a hard disk or optical storage unit) as well as in a main system memory during operation. Execution of these instructions and effectuation of the functions of the invention is accomplished by a central-processing unit ("CPU").
[0058] A group of functional modules that control the operation of the CPU and effectuate the operations of the invention as described above can be located in system memory (on the server or on a separate machine, as desired). An operating system directs the execution of low-level, basic system functions such as memory allocation, file management, and operation of mass storage devices. At a higher level, a control block, implemented as a series of stored instructions, responds to client-originated access requests by retrieving the user-specific profile and applying the one or more rules as described above.
[0059] While there have been described herein what are to be considered exemplary and preferred embodiments of the present invention, other modifications of the invention will become apparent to those skilled in the art from the teachings herein. The particular methods of manufacture and geometries disclosed herein are exemplary in nature and are not to be considered limiting. It is therefore desired to be secured in the appended claims all such modifications as fall within the spirit and scope of the invention. Accordingly, what is desired to be secured by Letters Patent is the invention as defined and differentiated in the following claims.
[0060] What is claimed is:

Claims

1. A method for modifying a controllable stimulus generated by a perceptual device in communication with a human user, the method comprising: generating an input signal to the perceptual device, the perceptual device sending a stimulus to the human user, the stimulus defined at least in part by a parameter, the parameter comprising a value; receiving an output signal from the human user, the output signal based at least in part on a perception of the stimulus by the human user; determining a difference between the input signal and the output signal; constructing a perceptual model based at least in part on the difference; and suggesting a value for the parameter based at least in part on the perceptual model.

2. The method of claim 1 wherein suggesting a value further comprises utilizing a knowledge base.

3. The method of claim 2 wherein the knowledge base comprises at least one of declarative knowledge and procedural knowledge.

4. The method of claim 1 further comprising generating a second input signal to the perceptual device based at least in part on the perceptual model.

5. The method of claim 1 wherein the input signal is an audio signal.

6. The method of claim 1, wherein the perceptual device is a digital audio device.

7. A system for modifying a controllable stimulus generated by a perceptual device in communication with a human user, the system comprising: a test set generator for generating a test set to the perceptual device, the perceptual device sending a stimulus to the human user, the stimulus defined at least in part by a parameter, the parameter comprising a value; a signal receiver for receiving an output signal from the human user, the output signal based at least in part on a perception of the stimulus by the human user; a perceptual model module for constructing a perceptual model based at least in part on the difference; and a parameter generator for suggesting a value for the parameter based at least in part on the perceptual model.

8. The system of claim 7 further comprising a second signal generator for generating a second input signal to the perceptual device based at least in part on the perceptual model.

9. The system of claim 7 further comprising a storage module for storing information used in the construction of the perceptual model.

10. The system of claim 9 wherein the information stored in the storage module comprises a knowledge base.

11. The system of claim 7 further comprising a rule extraction module for formulating a rule based at least in part on the perceptual model.

12. The system of claim 9 wherein the parameter generator suggests a value for the parameter based at least in part on at least one of information obtained from the storage module and information obtained from the perceptual model module.

13. The system of claim 8 wherein the signal generator comprises the second signal generator.

14. The system of claim 7 wherein the input signal is an audio signal.

15. An article of manufacture having computer-readable portions embodied thereon for modifying a controllable stimulus generated by a perceptual device in communication with a user, the article comprising: computer readable instructions for providing an input signal to the perceptual device, the perceptual device sending a stimulus to the human user, the stimulus defined at least in part by a parameter, the parameter comprising a value; computer readable instructions for receiving an output signal from the agent, the output signal based at least in part on a perception of the stimulus by the human user; computer readable instructions for determining a difference between the input signal and the output signal; computer readable instructions for constructing a perceptual model based at least in part on the difference; and computer readable instructions for suggesting a value for the parameter based at least in part on the perceptual model.
16. The article of manufacture of claim 15, further comprising computer readable instructions for providing a second input signal to the perceptual device based at least in part on the perceptual model.
17. The article of manufacture of claim 15, wherein the input signal is an audio signal.
18. A method of tuning a perceptual device from a speech waveform, the method comprising the steps of: inputting a speech waveform from a user response to a stimulus; extracting at least one first acoustic feature from the waveform; segmenting at least one phoneme from the at least one first acoustic feature; extracting at least one second acoustic feature from the at least one phoneme; comparing the speech waveform to a stimulus; and determining at least one parameter value for the perceptual device.
19. The method of claim 18, further comprising the steps of: transmitting a stimulus to a user; and receiving a user response based at least in part on the stimulus.
20. The method of claim 18, wherein the at least one first acoustic feature is extracted utilizing a frame-based procedure.
21. The method of claim 18, wherein the at least one second acoustic feature is extracted utilizing a segment-based procedure.
22. The method of claim 19, further comprising the step of determining an error comprising a difference between the speech waveform and the stimulus.
23. The method of claim 22, wherein the error is equal to
\xi(s \rightarrow r) = \frac{\sum_i w_i \, |fs_i - fr_i|}{\sum_i w_i}
where w_i is the weight of the i-th feature, fs_i and fr_i are the i-th features of the stimulus and the response, respectively, and | · | denotes a distance measure.
24. The method of claim 23, wherein the distance measure comprises a Mahalanobis distance.
25. An article of manufacture having computer-readable program portions embedded thereon for tuning a perceptual device from a speech waveform, the program portions comprising: instructions for inputting a speech waveform from a user response to a stimulus; instructions for extracting at least one first acoustic feature from the waveform; instructions for segmenting at least one phoneme from the at least one first acoustic feature; instructions for extracting at least one second acoustic feature from the at least one phoneme; instructions for comparing the speech waveform to a stimulus; and instructions for determining at least one parameter value for the perceptual device.
26. A system for tuning a perceptual device from a speech waveform, the system comprising: a receiver for receiving a speech waveform from a user response to a stimulus; a first extractor for extracting at least one first acoustic feature from the waveform; a first processor for segmenting at least one phoneme from the at least one first acoustic feature; a second extractor for extracting at least one second acoustic feature from the at least one phoneme; a second processor for comparing the speech waveform to a stimulus; and a third processor for determining at least one parameter value for the perceptual device.
27. The system of claim 26, further comprising a transmitter for transmitting a stimulus to a user.
28. The system of claim 27, further comprising a fourth processor for determining an error comprising a difference between the speech waveform and the stimulus.
29. The system of claim 28, further comprising a system processor comprising the first extractor, the first processor, the second extractor, the second processor, the third processor, and the fourth processor.
PCT/US2009/052633 2008-08-04 2009-08-04 Automatic performance optimization for perceptual devices WO2010017156A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
AU2009279764A AU2009279764A1 (en) 2008-08-04 2009-08-04 Automatic performance optimization for perceptual devices
EP09791124A EP2321981A1 (en) 2008-08-04 2009-08-04 Automatic performance optimization for perceptual devices

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US12/185,394 US8755533B2 (en) 2008-08-04 2008-08-04 Automatic performance optimization for perceptual devices
US12/185,394 2008-08-04
US16445309P 2009-03-29 2009-03-29
US61/164,453 2009-03-29

Publications (1)

Publication Number Publication Date
WO2010017156A1 true WO2010017156A1 (en) 2010-02-11

Family

ID=41401791

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2009/052633 WO2010017156A1 (en) 2008-08-04 2009-08-04 Automatic performance optimization for perceptual devices

Country Status (3)

Country Link
EP (1) EP2321981A1 (en)
AU (1) AU2009279764A1 (en)
WO (1) WO2010017156A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE2349626A1 (en) * 1973-10-03 1975-04-10 Bosch Elektronik Gmbh Speech audiometer coupled with tape recorder - displays associated group of similar sounding words with each word from tape recording
US5729658A (en) * 1994-06-17 1998-03-17 Massachusetts Eye And Ear Infirmary Evaluating intelligibility of speech reproduction and transmission across multiple listening conditions
EP0714069A2 (en) * 1994-11-24 1996-05-29 Matsushita Electric Industrial Co., Ltd. Optimization adjusting method and optimization adjusting apparatus
WO1999031937A1 (en) * 1997-12-12 1999-06-24 Knowles Electronics, Inc. Automatic system for optimizing hearing aid adjustments
US20050027537A1 (en) * 2003-08-01 2005-02-03 Krause Lee S. Speech-based optimization of digital hearing devices
US20060045281A1 (en) * 2004-08-27 2006-03-02 Motorola, Inc. Parameter adjustment in audio devices

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LAWRENCE R RABINER: "A Tutorial on Hidden Markov Models and selected Applications in Speech Recognition", PROCEEDINGS OF THE IEEE, IEEE. NEW YORK, US, vol. 77, no. 2, 1 February 1989 (1989-02-01), pages 257 - 286, XP002550447, ISSN: 0018-9219 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9426599B2 (en) 2012-11-30 2016-08-23 Dts, Inc. Method and apparatus for personalized audio virtualization
US10070245B2 (en) 2012-11-30 2018-09-04 Dts, Inc. Method and apparatus for personalized audio virtualization
US9794715B2 (en) 2013-03-13 2017-10-17 Dts Llc System and methods for processing stereo audio content
EP2924676A1 (en) 2014-03-25 2015-09-30 Oticon A/s Hearing-based adaptive training systems
US10198964B2 (en) 2016-07-11 2019-02-05 Cochlear Limited Individualized rehabilitation training of a hearing prosthesis recipient
CN112424863A (en) * 2017-12-07 2021-02-26 Hed科技有限责任公司 Voice perception audio system and method
CN112424863B (en) * 2017-12-07 2024-04-09 Hed科技有限责任公司 Voice perception audio system and method

Also Published As

Publication number Publication date
AU2009279764A1 (en) 2010-02-11
EP2321981A1 (en) 2011-05-18

Similar Documents

Publication Publication Date Title
US20220240842A1 (en) Utilization of vocal acoustic biomarkers for assistive listening device utilization
US10997970B1 (en) Methods and systems implementing language-trainable computer-assisted hearing aids
EP3709115B1 (en) A hearing device or system comprising a user identification unit
US8433568B2 (en) Systems and methods for measuring speech intelligibility
US7206416B2 (en) Speech-based optimization of digital hearing devices
US9666181B2 (en) Systems and methods for tuning automatic speech recognition systems
US20210030371A1 (en) Speech production and the management/prediction of hearing loss
US8755533B2 (en) Automatic performance optimization for perceptual devices
CN109951783A (en) For the method based on pupil information adjustment hearing aid configuration
US20230412995A1 (en) Advanced hearing prosthesis recipient habilitation and/or rehabilitation
US20180012511A1 (en) Individualized rehabilitation training of a hearing prosthesis recipient
WO2010017156A1 (en) Automatic performance optimization for perceptual devices
US20210321208A1 (en) Passive fitting techniques
Karbasi et al. ASR-based speech intelligibility prediction: A review
US20210264937A1 (en) Habilitation and/or rehabilitation methods and systems
US20220076663A1 (en) Prediction and identification techniques used with a hearing prosthesis
Legrand et al. Interactive evolution for cochlear implants fitting
CN116171181A (en) Novel tinnitus management technology
US8401199B1 (en) Automatic performance optimization for perceptual devices
Van Zyl Objective determination of vowel intelligibility of a cochlear implant model
Leyla Neural response based speaker identification under noisy condition/Leyla Roohisefat
Roohisefat Neural Response Based Speaker Identification Under Noisy Condition
WO2010025356A2 (en) System and methods for reducing perceptual device optimization time

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09791124

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2009279764

Country of ref document: AU

WWE Wipo information: entry into national phase

Ref document number: 2009791124

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2009279764

Country of ref document: AU

Date of ref document: 20090804

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE