US20080132295A1 - System and method for improved loudspeaker functionality - Google Patents

System and method for improved loudspeaker functionality

Info

Publication number
US20080132295A1
Authority
US
United States
Prior art keywords
loudspeaker
signal
sense element
audio signal
electronic device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US11/634,817
Other versions
US8311590B2 (en)
Inventor
Ronald J. Horowitz
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qualcomm Inc
Original Assignee
Palm Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US11/634,817
Application filed by Palm, Inc.
Assigned to PALM, INC.: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HOROWITZ, RONALD J.
Assigned to JPMORGAN CHASE BANK, N.A.: SECURITY AGREEMENT. Assignors: PALM, INC.
Publication of US20080132295A1
Assigned to PALM, INC.: RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: JPMORGAN CHASE BANK, N.A., AS ADMINISTRATIVE AGENT
Assigned to HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: PALM, INC.
Publication of US8311590B2
Application granted
Assigned to PALM, INC.: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.
Assigned to HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: PALM, INC.
Assigned to PALM, INC.: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.
Assigned to HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: PALM, INC.
Assigned to QUALCOMM INCORPORATED: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HEWLETT-PACKARD COMPANY, HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P., PALM, INC.
Legal status: Active
Adjusted expiration

Links

Images

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 25/00: Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R 25/45: Prevention of acoustic reaction, i.e. acoustic oscillatory feedback
    • H04R 25/453: Prevention of acoustic reaction, i.e. acoustic oscillatory feedback, electronically
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M 9/00: Arrangements for interconnection not involving centralised switching
    • H04M 9/08: Two-way loud-speaking telephone systems with means for conditioning the signal, e.g. for suppressing echoes for one or both directions of traffic
    • H04M 9/082: Two-way loud-speaking telephone systems with means for conditioning the signal, e.g. for suppressing echoes for one or both directions of traffic, using echo cancellers

Abstract

An electronic device comprises a microphone, a transceiver circuit, a loudspeaker, a sense element and a processing circuit. The microphone is configured to receive a first audio signal. The transceiver circuit is configured to communicate the first audio signal to a remote device and to receive a second audio signal from the remote device. The loudspeaker is configured to provide an audible signal based on the second audio signal. The sense element is configured to sense the audible signal provided by the loudspeaker. The sense element may be positioned at a distance from the loudspeaker different than a distance between the microphone and the loudspeaker. The processing circuit is configured to process at least one of the first audio signal and the second audio signal based on a sensed signal from the sense element.

Description

    BACKGROUND
  • Some electronic devices have speakerphone capabilities by use of a microphone and loudspeaker. Speakerphones require some form of acoustic separation between the microphone and loudspeaker to prevent echo and other interference with the microphone. Also, speakerphones suffer from nonlinearities in their audio output caused by such things as distortion and non-linear frequency response.
  • Conventional methods use a prediction of acoustical feedback from a loudspeaker to a speakerphone microphone based on a linear assumption of the loudspeaker's output. Some methods attempt to use echo cancellers, non-linear processing, or noise gates.
  • Further, some high-fidelity loudspeakers use sense elements and feedback to linearize their outputs.
  • However, there is a need for an improved system and method for attenuating or eliminating acoustical feedback from a loudspeaker to a microphone. Further, there is a need for an improved system and method for reducing nonlinearities in a speakerphone system. Further still, there is a need to move from a half duplex speakerphone system closer to a full duplex speakerphone system.
  • The teachings herein extend to those embodiments which are within the scope of the appended claims, regardless of whether they accomplish one or more of the above-mentioned needs.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a front view of a mobile computing device, according to an exemplary embodiment;
  • FIG. 2 is a back view of the mobile computing device of FIG. 1, according to an exemplary embodiment;
  • FIG. 3 is a block diagram of the mobile computing device of FIG. 1, according to an exemplary embodiment;
  • FIG. 4 is a block diagram of a system for reducing acoustical feedback from a loudspeaker to a microphone, according to an exemplary embodiment;
  • FIG. 5 is a block diagram of a system for reducing distortion in an audible signal provided by a loudspeaker, according to an exemplary embodiment;
  • FIG. 6 is a block diagram of a system for improved loudspeaker functionality, according to an exemplary embodiment;
  • FIG. 7 is a flowchart showing a method for reducing acoustical feedback from a loudspeaker to a microphone, according to an exemplary embodiment; and
  • FIG. 8 is a flowchart showing a method for reducing distortion in an audible signal provided by a loudspeaker, according to an exemplary embodiment.
  • DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS
  • Referring first to FIG. 1, a mobile computing device 10 is shown. Device 10 is a smart phone, which is a combination mobile telephone and handheld computer having personal digital assistant functionality. The teachings herein can be applied to other mobile computing devices (e.g., a laptop computer) or other electronic devices (e.g., a desktop personal computer, home or car audio system, etc.). Personal digital assistant functionality can comprise one or more of personal information management, database functions, word processing, spreadsheets, voice memo recording, etc. A smart phone is configured to synchronize personal information from one or more applications with a remote computer (e.g., desktop, laptop, server, etc.). Device 10 is further configured to receive and operate additional applications provided to device 10 after manufacture, e.g., via wired or wireless download, memory card, etc.
  • Device 10 comprises a display 12 and a user input device 14 (e.g., a QWERTY keyboard, buttons, touch screen, speech recognition engine, etc.). Device 10 also comprises a speaker 15 (e.g., an earpiece speaker). Speaker 15 may be a speaker configured to provide audio output with a volume suitable for a user placing speaker 15 against or near the ear. Speaker 15 may be a part of an electrodynamic receiver, such as part number 419523 manufactured by Foster Electric Co., Ltd., Japan. Speaker 15 may be positioned above display 12 or in another location on device 10. Device 10 comprises a housing 11 having a front side 13 and a back side 17 (FIG. 2). Speaker 15 may be positioned on the front side 13 along with display 12 and user input device 14, and a loudspeaker 16 (or other speaker or transducer) may be positioned on the back side along with a battery compartment 19. Positioning loudspeaker 16 on back side 17 may be advantageous when using a directional sense element 21 on front side 13.
  • Device 10 further comprises a sense element 21 (e.g., a microphone, such as a surface mount or other microphone, or other acoustic sense element) coupled to a bottom edge 23 of housing 11. Device 10 further comprises a sense element 25 (e.g., a feedback sense element, which may also be a microphone or other acoustic sense element, such as an infrared sensor, which may use Doppler interferometry) configured to sense an audible signal provided by loudspeaker 16. In alternative embodiments, display 12, user input device 14, speaker 15, loudspeaker 16, and sense elements 21, 25 may each be positioned anywhere on front side 13, back side 17 or the edges therebetween.
  • Loudspeaker 16 is an electro-acoustic transducer that converts electrical signals into sounds loud enough to be heard at a distance. Loudspeaker 16 can be used for speakerphone functionality. While loudspeaker 16 may be configured to produce audio output at a plurality of different volumes, it is typically configured to produce audio output at a volume suitable for a user to comfortably hear at some distance from the speaker, such as a few inches to a few feet away. Loudspeaker 16 may be an electrodynamic loudspeaker, such as part number HDR 9164, manufactured by Hosiden Corporation, Osaka, Japan.
  • Referring now to FIG. 3, device 10 comprises a processing circuit 20 comprising a processor 22. Processing circuit 20 can comprise one or more microprocessors, microcontrollers, and other analog and/or digital circuit components configured to perform the functions described herein. Processing circuit 20 comprises memory (e.g., random access memory, read only memory, flash, etc.) configured to store software applications provided during manufacture or subsequent to manufacture by the user or by a distributor of device 10. In one embodiment, processor 22 can comprise a first, applications microprocessor configured to run a variety of personal information management applications, such as calendar, contacts, e-mail, etc., and a second, radio processor on a separate chip (or as part of a dual-core chip with the application processor). The radio processor is configured to operate telephony and/or data communication functionality. Device 10 can be configured to use the radio processor for cellular radio telephone communication, such as Code Division Multiple Access (CDMA), Global System for Mobile Communications (GSM), Third Generation (3G) systems such as Wide-Band CDMA (WCDMA), or other cellular radio telephone technologies. Device 10 can further be configured to use the radio processor for data communication functionality, for example, via GSM with General Packet Radio Service (GPRS) systems (GSM/GPRS), CDMA/1XRTT systems, Enhanced Data Rates for Global Evolution (EDGE) systems, Evolution Data Only or Evolution Data Optimized (EV-DO), and/or other data communication technologies.
  • Device 10 comprises a transceiver circuit 24 which comprises analog and/or digital electrical components configured to receive and transmit wireless signals via antenna 28 to provide cellular telephone and/or data communications with a fixed wireless access point, such as a cellular telephone tower, in conjunction with a network carrier, such as, Verizon Wireless, Sprint, etc. Device 10 can further comprise circuitry to provide communication over a wide area network, such as WiMax, a local area network, such as Ethernet or according to an IEEE 802.11x standard or a personal area network, such as a Bluetooth or infrared communication technology.
  • Display 12 can comprise a touch screen display in order to provide user input to processor 22 to control functions, such as to dial a telephone number, enable/disable speakerphone audio, provide user inputs regarding increasing or decreasing the volume of audio provided through speaker 15 and/or loudspeaker 16, etc. Alternatively or in addition, user input device 14 (which can comprise one or more buttons, switches, dials, a track ball, a four-way or five-way switch, etc.) can provide similar inputs as those of touch screen display 12. Device 10 can further comprise a stylus 30 to assist the user in making selections on display 12. Processor 22 can further be configured to provide video conferencing capabilities by displaying on display 12 video from a remote participant to a video conference, by providing a video camera on device 10 for providing images to the remote participant, by providing text messaging, two-way audio streaming in full- and/or half-duplex mode, etc.
  • Sense element 21 is configured to receive audio signals, such as voice signals, from a user or other person in the vicinity of device 10, typically by way of spoken words. Sense element 21 is configured as an electro-acoustic sense element to provide audio signals from the vicinity of device 10 and to convert them to an electrical signal to provide to processor 22. Processor 22 can provide a digital voice recorder function, wireless telephone function, push-to-talk function, etc. with audible words spoken into sense element 21. Processor 22 may also provide speech recognition and/or voice control of features operable on device 10 with audible words spoken into sense element 21.
  • Referring again to FIG. 3, a speaker driver circuit 32 and a loudspeaker driver circuit 34 are provided, which may comprise analog and/or digital circuitry configured to receive audio data from processor 22 and to provide filtering, signal processing, equalizer functions, or other audio signal processing steps to the audio data. For example, the incoming audio data can comprise one or more of a downlink signal received by transceiver circuit 24 from a remote participant to a telephone call or a video conference, prerecorded audio, or audio from a game or audio file stored on device 10, etc. Drivers 32, 34 may then provide the audio data to speaker 15 and/or loudspeaker 16 to provide the audio to a user or another person in the vicinity of device 10. Drivers 32, 34 may be part no. TPA6203A1, manufactured by Texas Instruments Inc., Dallas, Tex.
  • Referring now to FIG. 4, a block diagram of an exemplary system for reducing acoustic feedback is shown. Transceiver circuit 24 is configured to receive an audio signal from a remote computing device, which can be a downlink audio signal 40. Downlink audio signal 40 can be provided by processing circuit 20 to loudspeaker 16. Loudspeaker 16 is configured to provide an audible signal based on the downlink audio signal. Sense element 25 is configured to sense the audible signal provided by loudspeaker 16. Sense element 25 is positioned at a distance from loudspeaker 16 different than a distance between sense element 21 and loudspeaker 16. The distance may be greater or less than the distance between sense element 21 and loudspeaker 16.
  • Sense element 25 is configured to provide a sensed signal to processing circuit 20, which comprises an audio echo canceller 42. Echo canceller 42 can comprise analog and/or digital electronics configured to provide an echo cancellation process to a first audio signal 44 provided by sense element 21. Echo canceller 42 can comprise computer instructions stored on a computer-readable medium and executed by a processing device, such as a microprocessor, digital signal processor, etc. Echo canceller 42 can further provide a suitable delay in echo cancellation output signals provided by processor 22 to summation circuit 46. The delay can be predetermined based on the acoustical characteristics of the acoustic path between loudspeaker 16 and sense element 25 and the acoustic characteristics of the acoustic path between loudspeaker 16 and sense element 21. Summation circuit 46 is configured to process first audio signal 44 by, for example, attenuating or reducing that portion of first audio signal 44 comprising feedback or other audio signals from loudspeaker 16. The processed signal is provided by summation circuit 46 as an output signal (e.g., an uplink audio signal 48) for uplink to transceiver circuit 24 for a wireless telephony communication. The functions of echo canceller 42 and summation circuit 46 can be combined in a single algorithm or processor, or may be provided by separate circuit components.
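  • The echo-cancellation path just described can be sketched in a few lines of code. The following is a minimal, hypothetical NumPy illustration, not the patented implementation: the signal from sense element 25 (which hears loudspeaker 16 directly) serves as the reference for a normalized-LMS adaptive filter, and the summation stage subtracts the predicted echo from first audio signal 44 to form uplink audio signal 48. The function name, tap count, and step size are illustrative assumptions.

```python
import numpy as np

def nlms_echo_cancel(mic_signal, sense_signal, num_taps=128, mu=0.5, eps=1e-8):
    """Sketch of echo canceller 42 plus summation circuit 46 (assumed parameters).

    mic_signal   -- first audio signal 44 from sense element 21 (near-end talker + echo)
    sense_signal -- sensed signal from sense element 25, used as the echo reference
    Returns an echo-reduced signal suitable for uplink audio signal 48.
    """
    mic = np.asarray(mic_signal, dtype=float)
    ref = np.asarray(sense_signal, dtype=float)
    w = np.zeros(num_taps)            # adaptive estimate of the loudspeaker-to-mic path
    ref_buf = np.zeros(num_taps)      # most recent reference samples
    uplink = np.zeros(len(mic))

    for n in range(len(mic)):
        ref_buf = np.roll(ref_buf, 1)
        ref_buf[0] = ref[n]
        echo_estimate = np.dot(w, ref_buf)     # predicted echo at sense element 21
        error = mic[n] - echo_estimate         # summation stage: mic minus echo estimate
        uplink[n] = error
        # normalized LMS update keeps adaptation stable across signal levels
        w += (mu / (eps + np.dot(ref_buf, ref_buf))) * error * ref_buf
    return uplink
```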
  • Sense element 25 may be positioned at any of a plurality of locations on or coupled to housing 11 or components within housing 11, such as a portion of the housing, a printed circuit board, a display board, etc. Sense element 25 can comprise a unidirectional, omni-directional, or other type of microphone. Sense element 25 can be closer to loudspeaker 16 than sense element 21, and may be positioned within 1 centimeter, 2 centimeters, 5 centimeters, or more of loudspeaker 16. Sense element 21 can be positioned more than 3 centimeters from loudspeaker 16. Sense element 21 may be closer than 3 centimeters to loudspeaker 16 when sense element 21 is disposed on or near an opposite side of the housing from loudspeaker 16. In one embodiment, the further sense element 25 is positioned from loudspeaker 16, the less time processing circuit 20 will have to calculate corrections, such as echo cancellation. In other embodiments, sense element 25 may be positioned further from loudspeaker 16 than sense element 21.
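  • To make the placement trade-off concrete, the lead time available to processing circuit 20 is roughly the difference in acoustic propagation delay between the loudspeaker-to-sense-element-25 path and the loudspeaker-to-sense-element-21 path. The distances in the sketch below are purely illustrative assumptions; only the speed of sound (about 343 m/s in room-temperature air) is a known constant.

```python
SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at room temperature

def lead_time_ms(dist_sense25_m, dist_sense21_m):
    """Extra time (ms) processing circuit 20 gains because sense element 25
    hears loudspeaker 16 before the same sound reaches sense element 21."""
    return (dist_sense21_m - dist_sense25_m) / SPEED_OF_SOUND_M_S * 1e3

# Illustrative placement only: sense element 25 at 1 cm, sense element 21 at 8 cm
print(lead_time_ms(0.01, 0.08))   # roughly 0.2 ms of acoustic lead time
```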
  • In another embodiment, sense elements 21 and 25 are at least 1 centimeter apart from one another, 2 centimeters apart, or 5 centimeters or more apart, regardless of where they are positioned on or around device 10.
  • According to one advantageous aspect, sense elements 21 and 25 are not the same model sense elements. For example, sense elements 21 and 25 can have at least one different characteristic or a plurality of different characteristics, wherein the characteristics can comprise frequency response, self noise or equivalent noise, maximum sound pressure level (SPL), clipping level, dynamic range, and sensitivity.
  • Sense elements 21 and 25 can further be configured to receive an audible signal from loudspeaker 16 out of phase with each other, wherein sense element 21 is configured to receive the audible signal in a first phase, sense element 25 is configured to receive the audible signal in a second phase, and the first phase is different than the second phase. The phases can further be substantially different. For echo cancellation, phase can be changed by inverting the signal received by one of sense elements 21, 25 if sense elements 21, 25 are roughly equidistant from loudspeaker 16. If sense elements 21, 25 are at different distances from loudspeaker 16, the difference in phase angle may increase with increasing frequency. Processing circuit 20 can be configured to correct for these phase differences as part of the echo canceling process.
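  • Because a phase difference that grows linearly with frequency is equivalent to a pure time delay, one simple and purely illustrative way to model the correction is to delay the closer sense element's signal by the path-length difference, with an optional inversion for the roughly equidistant case. The distances, sample rate, and integer-sample delay below are assumptions made for this sketch; a real implementation might instead fold the correction into the echo canceller's adaptive filter.

```python
import numpy as np

def align_sense_signals(sense25, sense21, dist25_m, dist21_m, fs=8000, invert=False):
    """Sketch of phase handling between sense elements 25 and 21 (assumed geometry)."""
    c = 343.0                                      # speed of sound, m/s
    delay = int(round((dist21_m - dist25_m) / c * fs))
    aligned = np.roll(np.asarray(sense25, dtype=float), delay)
    if delay > 0:
        aligned[:delay] = 0.0                      # clear samples wrapped around by the roll
    if invert:
        aligned = -aligned                         # 180-degree flip for the equidistant case
    return aligned, np.asarray(sense21, dtype=float)
```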
  • In one exemplary embodiment, sense elements 21 and 25 have a known phase relationship with signals received from loudspeaker 16, and have other known acoustical characteristics at the time of manufacture of device 10. According to a further advantage, in one embodiment, sense element 25 and processing circuit 20 need not be tuned to the resonant frequency of loudspeaker 16.
  • Referring now to FIG. 5, an exemplary system for distortion reduction, linearization, or other processing of an audible signal provided by a loudspeaker will be described. In this embodiment, downlink audio signal 40 is provided to processor 22. Processor 22 can comprise digital and/or analog circuit components and/or software instructions configured to process downlink audio signal 40 for playing via loudspeaker 16. Processor 22 can comprise a digital signal processor, a negative feedback circuit, a feed forward circuit, etc. Processor 22 can further comprise echo cancellation, filtering, or other processing functions. Processor 22 is configured to provide the processed audio signal to an amplifier 50 which can be a variable amplifier configured to be controlled by user input or an application to adjust volume. Amplifier 50 is configured to provide the amplified signal to loudspeaker 16 to provide an audible signal. Sense element 25 is configured to sense the audible signal provided by loudspeaker 16 and to provide a sensed signal 52 to processor 22. Processor 22 is configured to process the downlink audio signal 40 based on sensed signal 52. For example, processor 22 may be configured to linearize the audible signal output by loudspeaker 16 based on sensed signal 52. Linearization can be used to reduce nonlinearities in the output of loudspeaker 16. Linearization may comprise taking any characteristic with curves or lumps in it and providing a flatter, more linear output characteristic. For distortion or sensitivity, a curve of distortion versus amplitude may have a pronounced shoulder region (e.g. due to mechanical or magnetic non-linearity). For frequency response, variations in sensitivity versus frequency (e.g. due to various resonances) can be present. Linearization may also refer to linearizing the speaker in terms of flat frequency response. Processor 22 may be configured to reduce, attenuate, or eliminate any of these or other types of nonlinearities. A feedback-type linearization scheme may use adaptive and/or predictive algorithms to provide complementary pre-distortion to or compression of the output signal. By linearizing the output of loudspeaker 16, distortion can be reduced.
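  • As one concrete and deliberately simplified reading of the feedback-type linearization idea, the sketch below adapts a low-order polynomial pre-distorter so that the output reported by sense element 25 tracks the desired downlink audio. This is a generic LMS-style pre-distortion scheme chosen for illustration; the patent does not prescribe this particular algorithm, and `speaker_model`, the polynomial order, and the step size are assumptions.

```python
import numpy as np

def adaptive_predistortion(downlink, speaker_model, mu=0.01, order=3):
    """Feedback linearization sketch: pre-distort the drive so the sensed
    loudspeaker output tracks the desired audio (downlink audio signal 40).

    speaker_model -- callable emulating loudspeaker 16 as observed by sense
                     element 25: takes one drive sample, returns one sensed sample
    """
    x_in = np.asarray(downlink, dtype=float)
    coeffs = np.zeros(order)
    coeffs[0] = 1.0                      # start from a pass-through drive signal
    sensed = np.zeros(len(x_in))

    for n, x in enumerate(x_in):
        basis = np.array([x ** (k + 1) for k in range(order)])  # x, x^2, x^3, ...
        drive = np.dot(coeffs, basis)    # pre-distorted signal sent toward amplifier 50
        y = speaker_model(drive)         # what sense element 25 reports back (sensed signal 52)
        sensed[n] = y
        error = x - y                    # deviation from the desired (linear) output
        coeffs += mu * error * basis     # LMS-style update of the pre-distorter
    return sensed, coeffs

# Toy usage: a loudspeaker model with mild cubic compression
toy_speaker = lambda d: d - 0.1 * d ** 3
out, c = adaptive_predistortion(0.5 * np.sin(np.linspace(0, 20 * np.pi, 4000)), toy_speaker)
```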
  • Processor 22 can further be configured to provide echo cancellation, non-linear processing, noise gating, etc.
  • Referring now to FIG. 6, an exemplary system is shown providing a sense element 25 which may be used to reduce or attenuate acoustical feedback from loudspeaker 16 to sense element 21 and may further be used to reduce distortion in an audible signal provided by loudspeaker 16. In this embodiment, antenna 28 is configured to receive a wireless telephony signal (or other wireless signal) and to provide a signal comprising audio data (which may be a teleconference, video conference, etc.) to transceiver circuit 24. Transceiver circuit 24 comprises analog and/or digital components configured to provide a downlink audio signal 40 to processor 22 and to receive an uplink audio signal 48 and provide the uplink signal via antenna 28 to a remote device, such as a cellular telephony tower or a nearby wireless device (e.g., a nearby laptop, smart phone, mobile phone, Bluetooth-enabled phone, etc.). Processor 22 comprises an audio processor 54 and echo canceller 42. Audio data can alternatively be provided from memory associated with device 10, for example from a digital voice recorder, game application, audio file (e.g., .wav, .mp3, etc.) or other audio source. Audio processor 54 is configured to process downlink audio signal 40 to provide echo cancellation, noise gating, filtering, non-linear processing, etc. Audio processor 54 is configured to provide the processed audio signal to amplifier 50, which provides an output signal to loudspeaker 16. Sense element 25 is configured to provide an input to audio processor 54 and echo canceller 42. Audio processor 54 and echo canceller 42 can be different software applications on a single integrated circuit or may comprise separate integrated circuits (e.g., different chips, dual-core chip, etc.). Further, echo canceller 42 may be a portion of audio processor 54. Audio processor 54 is configured to linearize the audio signal provided to amplifier 50 based on sensed signal 52. Audio processor 54 may be configured to provide negative feedback, a feed forward process, a digital signal processor, etc. Echo canceller 42 is configured to provide an echo canceling process to first audio signal 44 and to attenuate or reduce acoustic coupling between loudspeaker 16 and sense element 21 using echo canceller 42 and summation circuit 46. Echo canceller 42 and summation circuit 46 are configured to provide uplink audio signal 48, which can be further processed by other processing steps (e.g., amplifying, frequency modification, filtering, etc.) prior to being sent via transceiver circuit 24 to a remote electronic device.
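  • The dual role of sense element 25 in FIG. 6, feeding both audio processor 54 (linearization) and echo canceller 42, can be summarized as a block-level data flow. The sketch below simply chains the two illustrative functions defined earlier (`adaptive_predistortion` and `nlms_echo_cancel`); it depicts the flow of signals 40, 44, 48, and 52 under those assumptions, not the actual partitioning between chips.

```python
def speakerphone_block(downlink_40, first_audio_44, speaker_model):
    """Block-level data-flow sketch for FIG. 6 (reuses the earlier sketches).

    1. Audio processor 54: linearize the loudspeaker drive using feedback
       from sense element 25 (here, adaptive_predistortion).
    2. Echo canceller 42 + summation circuit 46: use sensed signal 52 as a
       reference to strip loudspeaker leakage out of first audio signal 44.
    """
    sensed_52, _ = adaptive_predistortion(downlink_40, speaker_model)
    uplink_48 = nlms_echo_cancel(first_audio_44, sensed_52)
    return uplink_48
```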
  • Referring now to FIG. 7, an exemplary method is shown for reducing acoustical feedback from a loudspeaker to a microphone. In step 60, a first audio signal is received from the microphone. In step 62, a second audio signal is received via wireless telephony from a remote device. At step 64, the second audio signal is provided to a loudspeaker to generate an audible signal. At step 66, the audible signal is received at a sense element. The audible signal may be received substantially out of phase with the audible signal when received at the microphone. In an alternative embodiment, the audible signal received at sense element 25 could have any phase relationship to the audible signal received at microphone 21; provided the phase relationship is known or predetermined, processing circuit 20 may be configured to adjust the phase of the audible signal received at microphone 21 (or sense element 25) electronically or digitally. Alternatively, or in addition, the sense element may be positioned closer to or further from the loudspeaker than the microphone, as discussed hereinabove. At step 68, a sensed signal is provided based on the audible signal received with the sense element. At step 70, the first audio signal is processed based on the sensed signal. For example, an echo canceling process can be provided, or other processing, such as filtering, amplification, frequency adjustment, linearization, non-linear processing, etc. Steps 60-70 can be provided in a device which is further configured to operate a plurality of personal information management applications and to synchronize personal information from the applications with another remote computer (e.g., via a wired or wireless connection).
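  • Mapped onto code, steps 60 through 70 amount to the following skeleton. The `device` object and its methods are hypothetical placeholders for the hardware interfaces described above (sense element 21, transceiver circuit 24, loudspeaker 16, sense element 25), and the processing step reuses the `nlms_echo_cancel` sketch; none of these names come from the patent itself.

```python
def reduce_acoustic_feedback(device):
    """Skeleton of the FIG. 7 method; the device API is assumed, not real."""
    first_audio_44 = device.read_microphone()             # step 60: first audio signal
    second_audio_40 = device.receive_downlink()           # step 62: audio from remote device
    device.play_on_loudspeaker(second_audio_40)           # step 64: generate audible signal
    sensed_52 = device.read_sense_element()               # steps 66-68: sense, then sensed signal
    uplink_48 = nlms_echo_cancel(first_audio_44, sensed_52)  # step 70: process first signal
    device.transmit_uplink(uplink_48)
    return uplink_48
```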
  • Referring now to FIG. 8, an exemplary method of reducing distortion in an audible signal provided by a loudspeaker is shown. At step 72, an audio signal is received from a remote device at a mobile computing device. At step 74, the audio signal is provided to a loudspeaker to produce the audible signal. At step 76, the audible signal is sensed with a sense element. At step 78, a sensed signal is provided based on the audible signal. At step 80, the audio signal provided by the loudspeaker is linearized based on the sensed signal. Steps 72-80 can further be provided in a device which is also configured to operate a plurality of personal information management applications and synchronize personal information from the applications with another computer.
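  • One simple way to check that step 80 actually reduced distortion is to estimate the total harmonic distortion of the sensed signal for a sine-wave test tone before and after linearization. The helper below is a generic, rough THD estimate included purely as an illustrative measurement; the patent does not specify this metric, and the analysis bandwidth and harmonic count are assumptions.

```python
import numpy as np

def thd_estimate(signal, fs, fundamental_hz, num_harmonics=5, band_hz=20.0):
    """Rough THD: ratio of harmonic magnitude to fundamental magnitude."""
    x = np.asarray(signal, dtype=float)
    spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)

    def band_energy(f):
        # magnitude in a small band around frequency f (assumes adequate FFT resolution)
        band = (freqs > f - band_hz) & (freqs < f + band_hz)
        return np.sqrt(np.sum(spectrum[band] ** 2))

    fundamental = band_energy(fundamental_hz)
    harmonics = np.sqrt(sum(band_energy(fundamental_hz * k) ** 2
                            for k in range(2, num_harmonics + 2)))
    return harmonics / max(fundamental, 1e-12)
```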
  • References in the claims to processing a signal or “the” signal should be understood to also encompass processing a signal derived from the signal or otherwise downstream of the processing of the signal. Further, different elements or steps of the various embodiments may be combined with other elements or steps of the various embodiments described herein. Further, the configurations disclosed herein may be used in applications to address audio processing problems other than those disclosed herein.
  • According to one advantage, loudspeaker 16 is substantially non-linear, thereby making its behavior unpredictable. The systems and methods described hereinabove can be provided to sense or detect this non-linearity and compensate or adjust for the non-linearity using one or more of the processing circuits disclosed herein. One result can be reduced distortion in the audio provided by loudspeaker 16.
  • While the exemplary embodiments illustrated in the Figs. and described above are presently exemplary, it should be understood that these embodiments are offered by way of example only. For example, the teachings herein can apply to a home or car audio system. Also, sensed signals from sense element 25 can be processed by processor 22 in manners other than those shown above to address other methods of improving loudspeaker functionality. Accordingly, the present invention is not limited to a particular embodiment, but extends to various modifications that nevertheless fall within the scope of the appended claims.

Claims (20)

1. An electronic device, comprising:
a microphone configured to receive a first audio signal;
a transceiver circuit configured to communicate the first audio signal to a remote device and to receive a second audio signal from the remote device;
a loudspeaker configured to provide an audible signal based on the second audio signal;
a sense element configured to sense the audible signal provided by the loudspeaker, wherein the sense element is positioned at a distance from the loudspeaker different than a distance between the microphone and the loudspeaker; and
a processing circuit configured to process at least one of the first audio signal and the second audio signal based on a sensed signal from the sense element.
2. The electronic device of claim 1, wherein the processing circuit is configured to provide an echo cancellation process to the first audio signal based on the sensed signal from the sense element.
3. The electronic device of claim 1, wherein the processing circuit is configured to process the second audio signal based on the sensed signal from the sense element.
4. The electronic device of claim 3, wherein the processing circuit is configured to linearize the audible signal output by the loudspeaker based on the sensed signal from the sense element.
5. The electronic device of claim 4, wherein the processing circuit comprises at least one of a negative feedback circuit and a feed forward circuit configured to linearize the audible signal output by the loudspeaker.
6. The electronic device of claim 1, wherein the electronic device is a mobile computing device.
7. The electronic device of claim 6, wherein the electronic device is a handheld device.
8. The electronic device of claim 7, wherein the electronic device comprises a plurality of personal information management applications and the processing circuit is configured to synchronize personal information from the applications with another computer.
9. The electronic device of claim 1, wherein the sense element is positioned closer to the loudspeaker than the microphone.
10. The electronic device of claim 9, wherein the sense element is positioned within approximately 2 centimeters of the loudspeaker.
11. The electronic device of claim 1, wherein the microphone is configured to receive the audible signal with a first phase and the sense element is configured to receive the audible signal with a second phase different than the first phase.
12. The electronic device of claim 1, wherein the microphone and sense element have a plurality of substantially different characteristics.
13. A method of reducing acoustical feedback from a loudspeaker to a microphone, comprising:
receiving a first audio signal from a microphone;
receiving a second audio signal via wireless telephony from a remote device;
providing the second audio signal to a loudspeaker to generate an audible signal;
receiving the audible signal at a sense element substantially out of phase with the microphone;
providing a sensed signal based on the audible signal received at the sense element; and
processing the first audio signal based on the sensed signal.
14. The method of claim 13, further comprising receiving the audible signal at a sense element positioned closer to the loudspeaker than the microphone.
15. The method of claim 13, wherein the step of processing comprises providing an echo canceling process to the first audio signal based on the sensed signal.
16. The method of claim 13, further comprising:
operating a plurality of personal information management applications; and
synchronizing personal information from the applications with another computer.
17. A method of reducing distortion in an audible signal provided by a loudspeaker, comprising:
receiving an audio signal from a remote device at a mobile computing device;
providing the audio signal to a loudspeaker to produce the audible signal;
sensing the audible signal with a sense element;
providing a sensed signal based on the audible signal; and
linearizing the audio signal based on the sensed signal.
18. The method of claim 17, wherein the step of linearizing comprises providing at least one of a negative feedback circuit and a feed forward circuit.
19. The method of claim 17, wherein the step of linearizing is provided by a digital signal processor.
20. The method of claim 17, further comprising:
operating a plurality of personal information management applications; and
synchronizing personal information from the applications with another computer.
US11/634,817 | Filed: 2006-12-05 | Priority: 2006-12-05 | System and method for improved loudspeaker functionality | Status: Active (anticipated expiration 2031-08-12) | Granted as US8311590B2 (en)

Priority Applications (1)

Application Number: US11/634,817 | Priority Date: 2006-12-05 | Filing Date: 2006-12-05 | Title: System and method for improved loudspeaker functionality | Granted as: US8311590B2 (en)

Applications Claiming Priority (1)

Application Number: US11/634,817 | Priority Date: 2006-12-05 | Filing Date: 2006-12-05 | Title: System and method for improved loudspeaker functionality | Granted as: US8311590B2 (en)

Publications (2)

Publication Number Publication Date
US20080132295A1 true US20080132295A1 (en) 2008-06-05
US8311590B2 US8311590B2 (en) 2012-11-13

Family

ID=39476445

Family Applications (1)

Application Number Title Priority Date Filing Date
US11/634,817 Active 2031-08-12 US8311590B2 (en) 2006-12-05 2006-12-05 System and method for improved loudspeaker functionality

Country Status (1)

Country Link
US (1) US8311590B2 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2461315B (en) * 2008-06-27 2011-09-14 Wolfson Microelectronics Plc Noise cancellation system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6438240B1 (en) 1997-02-18 2002-08-20 Mitel Corporation Circuit to improve transducer separation in handsfree telephone

Patent Citations (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4629829A (en) * 1984-12-14 1986-12-16 Motorola, Inc. Full duplex speakerphone for radio and landline telephones
US5133013A (en) * 1988-01-18 1992-07-21 British Telecommunications Public Limited Company Noise reduction by using spectral decomposition and non-linear transformation
US5172408A (en) * 1990-08-01 1992-12-15 At&T Bell Laboratories Speakerphone state-controlled alerting arrangement
US5937070A (en) * 1990-09-14 1999-08-10 Todter; Chris Noise cancelling systems
US5491747A (en) * 1992-09-30 1996-02-13 At&T Bell Corp. Noise-cancelling telephone handset
US5732143A (en) * 1992-10-29 1998-03-24 Andrea Electronics Corp. Noise cancellation apparatus
US5432859A (en) * 1993-02-23 1995-07-11 Novatel Communications Ltd. Noise-reduction system
US5524058A (en) * 1994-01-12 1996-06-04 Mnc, Inc. Apparatus for performing noise cancellation in telephonic devices and headwear
US5790657A (en) * 1995-01-26 1998-08-04 Nec Corporation Echo suppressor capable of suppressing an echo resulting from acoustic coupling without spoiling a natural sound of conversation
US5555449A (en) * 1995-03-07 1996-09-10 Ericsson Inc. Extendible antenna and microphone for portable communication unit
US5982883A (en) * 1996-11-12 1999-11-09 U.S. Philips Corporation Telephone comprising a sliding microphone
US7031460B1 (en) * 1998-10-13 2006-04-18 Lucent Technologies Inc. Telephonic handset employing feed-forward noise cancellation
US20030185403A1 (en) * 2000-03-07 2003-10-02 Alastair Sibbald Method of improving the audibility of sound from a loudspeaker located close to an ear
US20020106077A1 (en) * 2000-10-05 2002-08-08 Philippe Moquin Use of handset microphone to enhance speakerphone loudspeaker performance
US20040062388A1 (en) * 2000-12-21 2004-04-01 Macdonald Donald Lewis Audio handheld device
US6957089B2 (en) * 2001-05-31 2005-10-18 Coby Electronics Corporation Compact hands-free adapter for use with a cellular telephone
US20040214614A1 (en) * 2001-08-07 2004-10-28 Aman James Edward Mobile phone and hands-free kit with inductive link
US6978010B1 (en) * 2002-03-21 2005-12-20 Bellsouth Intellectual Property Corp. Ambient noise cancellation for voice communication device
US20040192243A1 (en) * 2003-03-28 2004-09-30 Siegel Jaime A. Method and apparatus for reducing noise from a mobile telephone and for protecting the privacy of a mobile telephone user
US20040234084A1 (en) * 2003-05-20 2004-11-25 Peter Isberg Microphone circuits having adjustable directivity patterns for reducing loudspeaker feedback and methods of operating the same
US20050026568A1 (en) * 2003-08-01 2005-02-03 Hawker Larry E. System and method of acoustically safe automatic handsfree volume adjustment
US20060140428A1 (en) * 2004-12-29 2006-06-29 Research In Motion Limited Mobile wireless communications device with slidable configuration providing hearing aid compatibility features and related methods
US20060188089A1 (en) * 2005-02-18 2006-08-24 Diethorn Eric J Reduction in acoustic coupling in communication systems and appliances using multiple microphones
US20070036342A1 (en) * 2005-08-05 2007-02-15 Boillot Marc A Method and system for operation of a voice activity detector
US20070223736A1 (en) * 2006-03-24 2007-09-27 Stenmark Fredrik M Adaptive speaker equalization

Cited By (101)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11023513B2 (en) 2007-12-20 2021-06-01 Apple Inc. Method and apparatus for searching using an active ontology
US10381016B2 (en) 2008-01-03 2019-08-13 Apple Inc. Methods and apparatus for altering audio output signals
US9865248B2 (en) 2008-04-05 2018-01-09 Apple Inc. Intelligent text-to-speech conversion
US10108612B2 (en) 2008-07-31 2018-10-23 Apple Inc. Mobile device having human language translation capability with positional feedback
US20100080379A1 (en) * 2008-09-30 2010-04-01 Shaohai Chen Intelligibility boost
US10643611B2 (en) 2008-10-02 2020-05-05 Apple Inc. Electronic devices with voice command and contextual data processing capabilities
US11348582B2 (en) 2008-10-02 2022-05-31 Apple Inc. Electronic devices with voice command and contextual data processing capabilities
US11080012B2 (en) 2009-06-05 2021-08-03 Apple Inc. Interface for a virtual digital assistant
US11423886B2 (en) 2010-01-18 2022-08-23 Apple Inc. Task flow identification based on user intent
US10706841B2 (en) 2010-01-18 2020-07-07 Apple Inc. Task flow identification based on user intent
US10692504B2 (en) 2010-02-25 2020-06-23 Apple Inc. User profiling for voice input processing
US10049675B2 (en) 2010-02-25 2018-08-14 Apple Inc. User profiling for voice input processing
US10417405B2 (en) 2011-03-21 2019-09-17 Apple Inc. Device access using voice authentication
US11350253B2 (en) 2011-06-03 2022-05-31 Apple Inc. Active transport based notifications
US11069336B2 (en) 2012-03-02 2021-07-20 Apple Inc. Systems and methods for name pronunciation
US10079014B2 (en) 2012-06-08 2018-09-18 Apple Inc. Name recognition system
US9971774B2 (en) 2012-09-19 2018-05-15 Apple Inc. Voice-based media searching
US9966060B2 (en) 2013-06-07 2018-05-08 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
US10657961B2 (en) 2013-06-08 2020-05-19 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
US10769385B2 (en) 2013-06-09 2020-09-08 Apple Inc. System and method for inferring user intent from speech inputs
US11048473B2 (en) 2013-06-09 2021-06-29 Apple Inc. Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant
US11314370B2 (en) 2013-12-06 2022-04-26 Apple Inc. Method for extracting salient dialog usage from live data
US10283137B2 (en) * 2014-02-18 2019-05-07 Dolby Laboratories Licensing Corporation Device and method for tuning a frequency-dependent attenuation stage
US20170061982A1 (en) * 2014-02-18 2017-03-02 Dolby International Ab Device and Method for Tuning a Frequency-Dependent Attenuation Stage
US10417344B2 (en) 2014-05-30 2019-09-17 Apple Inc. Exemplar-based natural language processing
US10714095B2 (en) 2014-05-30 2020-07-14 Apple Inc. Intelligent assistant for home automation
US10497365B2 (en) 2014-05-30 2019-12-03 Apple Inc. Multi-command single utterance input method
US11257504B2 (en) 2014-05-30 2022-02-22 Apple Inc. Intelligent assistant for home automation
US10083690B2 (en) 2014-05-30 2018-09-25 Apple Inc. Better resolution when referencing to concepts
US10657966B2 (en) 2014-05-30 2020-05-19 Apple Inc. Better resolution when referencing to concepts
US10699717B2 (en) 2014-05-30 2020-06-30 Apple Inc. Intelligent assistant for home automation
US10904611B2 (en) 2014-06-30 2021-01-26 Apple Inc. Intelligent automated assistant for TV user interactions
US10431204B2 (en) 2014-09-11 2019-10-01 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US10453443B2 (en) 2014-09-30 2019-10-22 Apple Inc. Providing an indication of the suitability of speech recognition
US10438595B2 (en) 2014-09-30 2019-10-08 Apple Inc. Speaker identification and unsupervised speaker adaptation techniques
US10390213B2 (en) 2014-09-30 2019-08-20 Apple Inc. Social reminders
US9986419B2 (en) 2014-09-30 2018-05-29 Apple Inc. Social reminders
US11231904B2 (en) 2015-03-06 2022-01-25 Apple Inc. Reducing response latency of intelligent automated assistants
US10567477B2 (en) 2015-03-08 2020-02-18 Apple Inc. Virtual assistant continuity
US10529332B2 (en) 2015-03-08 2020-01-07 Apple Inc. Virtual assistant activation
US11087759B2 (en) 2015-03-08 2021-08-10 Apple Inc. Virtual assistant activation
US10311871B2 (en) 2015-03-08 2019-06-04 Apple Inc. Competing devices responding to voice triggers
WO2016186997A1 (en) * 2015-05-15 2016-11-24 Harman International Industries, Inc. Acoustic echo cancelling system and method
CN107636758A (en) * 2015-05-15 2018-01-26 哈曼国际工业有限公司 Acoustic echo cancelling system and method
US20180130482A1 (en) * 2015-05-15 2018-05-10 Harman International Industries, Incorporated Acoustic echo cancelling system and method
US11127397B2 (en) 2015-05-27 2021-09-21 Apple Inc. Device voice control
US10356243B2 (en) 2015-06-05 2019-07-16 Apple Inc. Virtual assistant aided communication with 3rd party service in a communication session
US11025565B2 (en) 2015-06-07 2021-06-01 Apple Inc. Personalized prediction of responses for instant messaging
US10354652B2 (en) 2015-12-02 2019-07-16 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US11069347B2 (en) 2016-06-08 2021-07-20 Apple Inc. Intelligent automated assistant for media exploration
US10067938B2 (en) 2016-06-10 2018-09-04 Apple Inc. Multilingual word prediction
US10733993B2 (en) 2016-06-10 2020-08-04 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US11152002B2 (en) 2016-06-11 2021-10-19 Apple Inc. Application integration with a digital assistant
US10580409B2 (en) 2016-06-11 2020-03-03 Apple Inc. Application integration with a digital assistant
US10942702B2 (en) 2016-06-11 2021-03-09 Apple Inc. Intelligent device arbitration and control
US10474753B2 (en) 2016-09-07 2019-11-12 Apple Inc. Language identification using recurrent neural networks
US10553215B2 (en) 2016-09-23 2020-02-04 Apple Inc. Intelligent automated assistant
US10043516B2 (en) 2016-09-23 2018-08-07 Apple Inc. Intelligent automated assistant
US11281993B2 (en) 2016-12-05 2022-03-22 Apple Inc. Model and ensemble compression for metric learning
US10593346B2 (en) 2016-12-22 2020-03-17 Apple Inc. Rank-reduced token representation for automatic speech recognition
US11204787B2 (en) 2017-01-09 2021-12-21 Apple Inc. Application integration with a digital assistant
US10417266B2 (en) 2017-05-09 2019-09-17 Apple Inc. Context-aware ranking of intelligent response suggestions
US10332518B2 (en) 2017-05-09 2019-06-25 Apple Inc. User interface for correcting recognition errors
US10847142B2 (en) 2017-05-11 2020-11-24 Apple Inc. Maintaining privacy of personal information
US10726832B2 (en) 2017-05-11 2020-07-28 Apple Inc. Maintaining privacy of personal information
US10395654B2 (en) 2017-05-11 2019-08-27 Apple Inc. Text normalization based on a data-driven learning network
US10755703B2 (en) 2017-05-11 2020-08-25 Apple Inc. Offline personal assistant
US10791176B2 (en) 2017-05-12 2020-09-29 Apple Inc. Synchronization and task delegation of a digital assistant
US10789945B2 (en) 2017-05-12 2020-09-29 Apple Inc. Low-latency intelligent automated assistant
US10410637B2 (en) 2017-05-12 2019-09-10 Apple Inc. User-specific acoustic models
US11301477B2 (en) 2017-05-12 2022-04-12 Apple Inc. Feedback analysis of a digital assistant
US11405466B2 (en) 2017-05-12 2022-08-02 Apple Inc. Synchronization and task delegation of a digital assistant
US10482874B2 (en) 2017-05-15 2019-11-19 Apple Inc. Hierarchical belief states for digital assistants
US10810274B2 (en) 2017-05-15 2020-10-20 Apple Inc. Optimizing dialogue policy decisions for digital assistants using implicit feedback
US10403278B2 (en) 2017-05-16 2019-09-03 Apple Inc. Methods and systems for phonetic matching in digital assistant services
US10303715B2 (en) 2017-05-16 2019-05-28 Apple Inc. Intelligent automated assistant for media exploration
US10311144B2 (en) 2017-05-16 2019-06-04 Apple Inc. Emoji word sense disambiguation
US11217255B2 (en) 2017-05-16 2022-01-04 Apple Inc. Far-field extension for digital assistant services
US10657328B2 (en) 2017-06-02 2020-05-19 Apple Inc. Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling
US10445429B2 (en) 2017-09-21 2019-10-15 Apple Inc. Natural language understanding using vocabularies with compressed serialized tries
US10755051B2 (en) 2017-09-29 2020-08-25 Apple Inc. Rule-based natural language processing
US10636424B2 (en) 2017-11-30 2020-04-28 Apple Inc. Multi-turn canned dialog
US10733982B2 (en) 2018-01-08 2020-08-04 Apple Inc. Multi-directional dialog
US10733375B2 (en) 2018-01-31 2020-08-04 Apple Inc. Knowledge-based framework for improving natural language understanding
US10789959B2 (en) 2018-03-02 2020-09-29 Apple Inc. Training speaker recognition models for digital assistants
US10592604B2 (en) 2018-03-12 2020-03-17 Apple Inc. Inverse text normalization for automatic speech recognition
US10818288B2 (en) 2018-03-26 2020-10-27 Apple Inc. Natural assistant interaction
US10909331B2 (en) 2018-03-30 2021-02-02 Apple Inc. Implicit identification of translation payload with neural machine translation
US11145294B2 (en) 2018-05-07 2021-10-12 Apple Inc. Intelligent automated assistant for delivering content from user experiences
US10928918B2 (en) 2018-05-07 2021-02-23 Apple Inc. Raise to speak
US10984780B2 (en) 2018-05-21 2021-04-20 Apple Inc. Global semantic word embeddings using bi-directional recurrent neural networks
US10892996B2 (en) 2018-06-01 2021-01-12 Apple Inc. Variable latency device coordination
US10684703B2 (en) 2018-06-01 2020-06-16 Apple Inc. Attention aware virtual assistant dismissal
US10984798B2 (en) 2018-06-01 2021-04-20 Apple Inc. Voice interaction at a primary device to access call functionality of a companion device
US10403283B1 (en) 2018-06-01 2019-09-03 Apple Inc. Voice interaction at a primary device to access call functionality of a companion device
US11386266B2 (en) 2018-06-01 2022-07-12 Apple Inc. Text correction
US11009970B2 (en) 2018-06-01 2021-05-18 Apple Inc. Attention aware virtual assistant dismissal
US11495218B2 (en) 2018-06-01 2022-11-08 Apple Inc. Virtual assistant operation in multi-device environments
US10504518B1 (en) 2018-06-03 2019-12-10 Apple Inc. Accelerated task performance
US10496705B1 (en) 2018-06-03 2019-12-03 Apple Inc. Accelerated task performance
US10944859B2 (en) 2018-06-03 2021-03-09 Apple Inc. Accelerated task performance

Also Published As

Publication number Publication date
US8311590B2 (en) 2012-11-13

Similar Documents

Publication Publication Date Title
US8311590B2 (en) System and method for improved loudspeaker functionality
US7925307B2 (en) Audio output using multiple speakers
EP3518555B1 (en) Valve for acoustic port
US8744091B2 (en) Intelligibility control using ambient noise detection
US8155330B2 (en) Dynamic audio parameter adjustment using touch sensing
US8095073B2 (en) Method and apparatus for improved mobile station and hearing aid compatibility
CN100559471C (en) Multi-mode audio processors and method of operating thereof
US10341759B2 (en) System and method of wind and noise reduction for a headphone
US8774399B2 (en) System for reducing speakerphone echo
EP2449754B1 (en) Apparatus, method and computer program for controlling an acoustic signal
US8811602B2 (en) Full duplex speakerphone design using acoustically compensated speaker distortion
EP2208364B1 (en) Noise cancellation circuit for electronic device
US20090253418A1 (en) System for conference call and corresponding devices, method and program products
EP1385324A1 (en) A system and method for reducing the effect of background noise
US20140135078A1 (en) Dynamic Speaker Management with Echo Cancellation
US9084063B2 (en) Hearing aid compatible audio device with acoustic noise cancellation
US20100080379A1 (en) Intelligibility boost
US20140254832A1 (en) Volume adjusting system and method
US20120257761A1 (en) Apparatus and method for auto adjustment of volume in a portable terminal
US20160080864A1 (en) Audio System and Method
US8351597B2 (en) Electronic device, echo canceling method thereof, non-transitory computer readable medium, circuit substrate, and portable telephone terminal device
CN106657621B (en) Self-adaptive adjusting device and method for sound signal
US20230058981A1 (en) Conference terminal and echo cancellation method for conference
US20110129102A1 (en) Method and apparatus for controlling sound volume in mobile communication terminal
KR20120115941A (en) Apparatus and method for auto adjustment of volume in a portable terminal

Legal Events

Date Code Title Description
AS Assignment

Owner name: PALM, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HOROWITZ, RONALD J.;REEL/FRAME:018932/0668

Effective date: 20070222

AS Assignment

Owner name: JPMORGAN CHASE BANK, N.A., NEW YORK

Free format text: SECURITY AGREEMENT;ASSIGNOR:PALM, INC.;REEL/FRAME:020319/0568

Effective date: 20071024

AS Assignment

Owner name: PALM, INC., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:024630/0474

Effective date: 20100701

AS Assignment

Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:PALM, INC.;REEL/FRAME:025204/0809

Effective date: 20101027

STCF Information on status: patent grant

Free format text: PATENTED CASE

CC Certificate of correction
AS Assignment

Owner name: PALM, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.;REEL/FRAME:030341/0459

Effective date: 20130430

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: PALM, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.;REEL/FRAME:031837/0544

Effective date: 20131218

Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:PALM, INC.;REEL/FRAME:031837/0239

Effective date: 20131218

Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:PALM, INC.;REEL/FRAME:031837/0659

Effective date: 20131218

AS Assignment

Owner name: QUALCOMM INCORPORATED, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HEWLETT-PACKARD COMPANY;HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.;PALM, INC.;REEL/FRAME:032132/0001

Effective date: 20140123

FPAY Fee payment

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 12