WO1999035637A1 - Voice activated switch method and apparatus - Google Patents

Voice activated switch method and apparatus

Info

Publication number
WO1999035637A1
Authority
WO
WIPO (PCT)
Prior art keywords
circuit
speech
signal
command
microcontroller
Application number
PCT/US1998/027648
Other languages
French (fr)
Inventor
Richard Matulich
Allan Ligi
Original Assignee
Richard Matulich
Allan Ligi
Application filed by Richard Matulich and Allan Ligi
Priority to AU20960/99A (published as AU2096099A)
Publication of WO1999035637A1

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/26 Speech to text systems
    • H ELECTRICITY
    • H05 ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05B ELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
    • H05B 47/00 Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
    • H05B 47/10 Controlling the light source
    • H05B 47/105 Controlling the light source in response to determined parameters
    • H05B 47/115 Controlling the light source in response to determined parameters by determining the presence or movement of objects or living beings
    • H05B 47/12 Controlling the light source in response to determined parameters by determining the presence or movement of objects or living beings by detecting audible sound
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223 Execution procedure of a spoken command
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B 20/00 Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B 20/40 Control techniques providing energy savings, e.g. smart controller or presence detection

Abstract

A voice activated device for producing control signals in response to speech is self-contained and requires no additional software or hardware. The device may be incorporated into a housing that replaces a wall switch that is connected to an AC circuit. An alternate housing is portable and includes a jack that plugs into and lies flush against a standard AC utility outlet, and at least one plug for accepting an AC jack of any electronic product or appliance. The device acts as a control interface between utility power and connected electrical devices by connecting or disconnecting power to the electrical devices based on speech commands.

Description

VOICE ACTIVATED SWITCH METHOD AND APPARATUS
This is a continuation-in-part of co-pending application Serial No. 09/002,436, filed January 2, 1998.
Field of the Invention
This invention relates generally to voice activated devices for producing control signals, and more specifically to a voice activated switch for producing control signals to switch on or switch off AC electrical devices.
Background of the Invention
The use of speech recognition technology is becoming a viable means to control one's environment. As the sophistication of speech-activated technology increases and the cost of the associated hardware and software decreases, the use of speech-controlled devices will be commonplace. Applications for speech recognition technology are numerous and include the control of appliances, consumer electronics, toys, and tools. Products and services employing speech recognition are developing rapidly and are continuously applied to new markets.
The use of speech recognition is ideal wherever the hands and/or the eyes are busy. Speech commands are a quick, hands-free way to control electrical devices. The dangers associated with walking into a dark room, or the inconveniences of interrupting tasks in order to turn on appliances or lights, are alleviated by the utilization of speech recognition technology.
Speech recognition technology has been in development for more than 25 years, resulting in a variety of hardware and software tools for personal computers. In a typical application, a speech recognition circuit board and compatible software programs are installed in a computer. These add-on programs, which operate continuously in the background of the computer's operating system, are designed to accept spoken words and either execute the spoken command or convert the words into text. The disadvantage of using this approach to control individual appliances is the necessity of one or more computers. Also, it is unlikely that manufacturers will add full-blown computer systems to control appliances such as washing machines or electronic products such as stereos. Computer-controlled systems that utilize speech recognition have been employed to control the appliances and electronics throughout a house or building; however, these systems are expensive, complicated, and require custom installation.
Remotely controlling an electrical appliance is currently possible using devices employing a variety of technologies. Products using acoustic signals are available on the market to control electrical appliances. These devices recognize specific sounds such as claps, and respond by toggling power switches. One drawback of utilizing an acoustic device is that it does not provide "hands-free" control. Also, the user must remember an acoustic code, such as a sequence of claps, for each appliance. Another way to control an appliance is by the utilization of a remote control. Remote control units utilizing speech recognition have been designed for electronic products such as VCRs. The speaker talks into a control unit while depressing a switch, and the speech commands are recognized and transmitted to the VCR using infra-red signals. Although this system offers a means for the remote control of electronics, it does not offer a hands-free solution. Additionally, the user must have the remote control unit with him or her, and each target appliance must be adapted to receive IR signals.
As with any developing technology, speech recognition poses many hurdles, including designing the most effective user interface and increasing response accuracy. A non-friendly user interface is likely to frustrate the user when non-responsiveness of the device is the only indication of a recognition error. Another difficulty involves extemporaneous conversations and sounds that may falsely trigger a device response. Speech recognition devices have attempted to overcome this problem by allowing a very limited number of speech commands such as "ON" and "OFF." However, these devices must be programmed with the voices of the speakers that will use the device, and do not anticipate noisy environments in which the device is required to distinguish between the speaker and other noises. Also, the limited vocabulary restricts use to one device per room, unless the speaker desires to turn on all appliances at the same time.
The current technology for the remote control of electronic consumer products fails to provide a hands-free, economical, compact, and easy-to-use device. Additionally, available designs do not offer solutions for inaccuracy due to false response, user frustration, and ambient noise interference. These problems and deficiencies are clearly felt in the art and are solved by the present invention in the manner described below.
SUMMARY OF THE INVENTION
It is an advantage of the present invention to provide a compact, stand-alone speech recognition circuit to control a variety of electrical devices, including consumer electronic products or appliances, without the need for a host computer.
It is another advantage to provide an easy-to-use device that is programmable to recognize a variety of command words so that more than one device can be utilized within one room. It is yet another advantage of the present invention to provide a low-cost replacement for a standard wall switch and switch box for speech control of electrical devices connected to the wall switch circuit. It is still another advantage to provide a portable speech recognition interface between a standard AC outlet and an electrical device. A further advantage of the invention is to provide a speech recognition device that incorporates user interfaces for confirming acceptance of speech commands, thereby increasing recognition accuracy while reducing the necessity for training the user.
In a preferred embodiment of the present invention, a stand-alone, programmable speech recognition device acts as a control interface between a 120 V or 230 V AC switch and a connected electrical appliance or light. In a preferred embodiment ("wall switch embodiment"), the voice activated control circuitry is designed to fit into a switch box shell that can be installed in place of a standard wall switch. In an alternate preferred embodiment ("outlet embodiment"), the voice activated control circuitry is encased in a portable, palm-sized shell that can be plugged into a standard outlet.
In a wall switch embodiment, the speech recognition circuitry of the invention is contained on a circuit board having dimensions to fit within a standard wall switch box. The circuit board has connections for user interfaces including input leads for a microphone for accepting a voice command, a manual switch controller for accepting manual operation of the switch, and at least one light-emitting diode ("LED"). The manual switch controller provides a manual means for operating the switch, and operates in cooperation with the speech recognition circuitry. A variety of technologies can be utilized for the manual switch including rocker switches, actuator-type controls, pushbuttons and touch plate technology. The preferred embodiment utilizes capacitance touch plate technology that is known in the art. The speech recognition device operates in a continuous listening mode which allows it to actively listen for sounds at all times. Ideally, the device is located in a position that is exposed and not hidden behind an object such as a piece of furniture. An exposed location allows a built-in microphone to pick up un-muffled sounds and speech in proximity to the device thereby increasing the response accuracy. The preferred wall switch embodiment is typically placed in a convenient location within a room and positioned at approximately four feet (122 cm) from a floor. Thus, the microphone will be at an optimal level to accept a speaker's commands, particularly in circumstances in which the speaker is seated. Obviously, where an AC outlet or a light is controlled by more than one wall switch, a microphone of at least one of the voice activated wall switches is more likely to be in proximity to the user (speaker).
The outlet embodiment of the present invention is plugged into an AC outlet. The outlet embodiment has at least one plug for accepting the cord jack of an electrical device, and may of necessity be plugged into an outlet behind an object that obstructs the line between the user and the device. Therefore, this embodiment may include a separate attachable microphone that is placed in a location most likely to maintain an unobstructed line between the microphone and the user. The use of a separate microphone allows the microphone to be placed in a convenient location that is in close proximity to the user. This is particularly useful where the environment is noisy, or where the user is disabled or has low mobility. In other embodiments of the outlet and wall switch devices, the microphone circuit includes a receiver for receiving transmitted radio frequency signals from a separate remote microphone. These embodiments are desirable for users who cannot effectively trigger the speech recognition because they are not in proximity to the device. For example, a user who is seated in a position outside the range of the microphone will be unable to control the device. An RF receiver will provide remote speech control of the speech recognition device.
The voice-activated device is continuously listening for an acceptable speech command as long as power from the utility main is available. Thus, the device is constantly processing background noises and establishing an ambient noise level. The ambient noise level is an average decibel level of the sounds in the frequency range of speech that are detected by the device. For example, a background noise level of a 50 decibel air conditioning unit causes the device to establish an ambient noise level of 50 decibels. Detected sounds below that level are ignored, and in order for the device to act upon a command word, the user must speak above that decibel level. Establishing an ambient noise level enables the device to be used in noisy environments.
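The ambient-level behaviour described above can be pictured with a short sketch. The following C program is illustrative only and is not the patent's firmware; the window size, the decibel figures, and the helper name ambient_db are assumptions made for the example.

    #include <stdio.h>

    #define WINDOW_SAMPLES 8   /* level readings averaged per time window (assumed) */

    /* Average of the in-band sound levels collected during one time window. */
    static double ambient_db(const double *window, int n)
    {
        double sum = 0.0;
        for (int i = 0; i < n; i++)
            sum += window[i];
        return sum / n;
    }

    int main(void)
    {
        /* Background of roughly 50 dB, e.g. a running air conditioning unit. */
        double window[WINDOW_SAMPLES] = {49, 50, 51, 50, 50, 49, 51, 50};
        double ambient = ambient_db(window, WINDOW_SAMPLES);

        double detected[] = {48.0, 49.5, 63.0};   /* the last value represents a spoken command */
        for (int i = 0; i < 3; i++) {
            if (detected[i] > ambient)
                printf("%.1f dB: above ambient (%.1f dB), passed to the recognizer\n",
                       detected[i], ambient);
            else
                printf("%.1f dB: at or below ambient, ignored\n", detected[i]);
        }
        return 0;
    }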
Upon receiving a signal in the frequency range of speech that is louder than the ambient level, the device determines whether the signal is a valid command word. A valid command word is a member of either a set of pre-programmed speaker independent words, or a set of user programmed speaker dependent words. These sets of command words correspond to two modes of operation known in the art as "speaker independent" and "speaker dependent" operation. The user has a choice of the mode of operation upon resetting the device. In the preferred wall switch embodiment of the invention, reset is activated by pressing the touch plate a specified number of times. Reset of the outlet embodiment occurs when the device is initially plugged into an outlet.
The first mode of operation is a speaker independent mode. In this mode the device can be used by various speakers and does not have to be trained to recognize individual voices. Therefore, the device is preprogrammed to respond to a large variety of speech patterns, inflections, and enunciations of the target command word. This mode of operation usually has a smaller number of valid command words than speaker dependent systems, which require more memory to store the various speech patterns. In the preferred embodiments, speaker independent command words include a name of an electrical device such as "LIGHTS" followed by action command words such as "ON" or "OFF" or "DIM."
A speaker dependent mode of operation recognizes only one speaker, or a limited number of speakers at a time. The speaker dependent mode is activated by resetting and "programming" the device. After detecting a reset condition, the device listens for a request to select the speaker dependent mode, and the user follows instructions to program the command words. In a preferred embodiment the user is prompted by the device through use of a user interface which includes prompts from an indicator such as an LED, or speech instructions from the device itself, or both. The device, operating in a speaker dependent mode of operation, achieves a high accuracy of word recognition. The disadvantage of using this mode of operation is that the system response accuracy is limited to the user who programmed the valid command set.
The device limits user frustration by signaling an acceptance of a valid command word through a user interface that includes an indicator such as an LED, or a speaker for communicating speech prompts. The feedback of the user interface permits the user to adjust his or her command word enunciations and inflections, which results in a higher response accuracy. Once the device recognizes and indicates acceptance of a valid command word, the user responds with an action command word such as "ON." If the action command word is within the set of valid command words, the device will respond by performing the desired action. For example, in the preferred wall switch embodiment, the device responds to the action command word by connecting power or disconnecting power to an electrical circuit that is connected to the wall switch. For applications where the action command word is meant to dim or brighten lights, the device responds by connecting AC at a reduced or increased voltage. In an alternate mode of operation, the action command word is not used, and the command word such as "LIGHTS" is repeated to toggle the lights on or off.
In another embodiment of the present invention, the device incorporates current carrier modulation techniques as disclosed in U.S.
Patent 3,818,481 of Dorfman, which patent is incorporated herein by this reference. Using this technology, the device recognizes a variety of electrical product command words, where only one command word is valid for the attached product. Other valid command words are transmitted over the utility main to a second device directly connected to the utility main or plugged into an AC outlet. The second device demodulates the command word and makes a determination of whether the command word is contained within its set of valid command words, and whether the command word corresponds to its attached product.
The present invention provides a compact, continuously listening speech recognition circuit that may be incorporated into a variety of designs including wall switches and portable outlet devices. A voice activated wall switch or wall outlet provides an improved method for controlling electrical devices. Limitations of the prior art, including the need for complex computer-controlled systems, user frustration, use in noisy environments, and limited speech command sets, are overcome by the present invention to increase response accuracy and device utility.
BRIEF DESCRIPTION OF THE DRAWINGS
Understanding of the present invention will be facilitated by consideration of the following detailed description of preferred embodiments of the present invention taken in conjunction with the accompanying drawings, in which like numerals refer to like parts, and in which:
Figure 1 is a block diagram of a speech controlled device;
Figure 2 is a perspective drawing of a portable speech activated device;
Figure 3a is a flowchart of a minimal functionality of a programming code for a preferred embodiment of the speech control device;
Figure 3b is a continuation of the flowchart of Figure 3a for an on/off application;
Figure 3c is a continuation of the flowchart of Figure 3a for a dimmer application;
Figure 4 is a front view of a wall switch of a preferred embodiment;
Figure 5 is a side view of a wall switch of a preferred embodiment;
Figure 6 is a block diagram of a speech activated wall switch;
Figure 7a is a flowchart of a wall switch of a preferred embodiment;
Figure 7b is a continuation of the flowchart of Figure 7a for a default mode of a preferred embodiment;
Figure 7c is a continuation of the flowchart of Figure 7a for a first user independent mode;
Figure 7d is a continuation of the flowchart of Figure 7a for a second user independent mode;
Figure 7e is a continuation of the flowchart of Figure 7a for a third user independent mode;
Figure 7f is a continuation of the flowchart of Figure 7a for a speaker dependent mode;
Figure 7g is a continuation of the flowchart of Figure 7f; and
Figure 7h is a continuation of the flowcharts of Figures 7f and 7g.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
"Speech recognition" refers to the ability of a device to recognize what words have been spoken and to take specific actions according to those words recognized. Figure 1 is a block diagram of a preferred embodiment of the electrical components of a speech recognition device. A microphone 2 accepts and converts speech and other sounds into electrical audio signals. The electrical audio signals at the output of the microphone 1 are amplified by an input amplifier 4 and fed into a bandpass filter 6. The band pass filter 6 of the preferred embodiment is designed to filter signals outside of the frequency range of approximately 580 Hz to 4.2 kHz, which represents the typical frequency range of speech. The filtered audio signal is introduced into an automatic gain control circuit 8. The phrase "automatic gain control" usually refers to a feedback loop that accepts a varying input signal and uses feedback to maintain a constant output signal. The automatic gain control circuit 8 of the preferred embodiment operates in a different manner by supplying a continuous ambient level signal to the microcontroller 20 over a predetermined time window. The microcontroller 20 maintains the ambient level during the time window by sending feedback signals 36, 38 to the automatic gain control circuit 8. The ambient level is used as a starting level for recognizing speech. Any background noise received by the device during the time window that is below the ambient level is ignored. After the time window has expired, the device establishes a new ambient level signal.
Establishing an ambient level is a desirable feature in noisy environments because the user need only speak above the ambient level to trigger the device. Additionally, the ambient level increases the device accuracy because the device will not falsely trigger in response to a stray or constant background noise. In the preferred embodiment, a time window has a duration in the range of 5 seconds to 1 minute, where approximately 5 seconds is an ideal duration. Obviously, the particular pre-determined time window may vary with the type of environment, and is not meant to be a limiting factor. For example, in a particularly quiet environment, the time window may be of a longer duration because changing background noises are not expected.
In the preferred embodiment, the automatic gain control loop includes the automatic gain control circuitry 8, the microcontroller 20, an amplifier 10, and a multiplying buffer 12. The output of the automatic gain control circuitry 8 is fed into an amplifier 10. The output of the amplifier 10 is fed into the microcontroller 20 and a multiplying buffer 12. Finally, the output of the multiplying buffer is also fed into the microcontroller. Thus, the "ambient level" that is sent to the microcontroller 20 consists of both a zero level 40 at the output of the amplifier 10, and a multiplied level 42 at the output of the multiplying buffer. The microcontroller 20 of the preferred embodiment of Figure 1 is a general purpose microcontroller manufactured by Sensory, Inc.™ which is configurable for a variety of applications including speech recognition. The microcontroller of the preferred embodiment requires the zero level 40 and the multiplied level 42 to produce feedback signals 36, 38. However, other embodiments of the invention using different microcontrollers may have differing input requirements to maintain the ambient level that is established using feedback signals 36, 38. Also, the automatic gain control circuitry 8 may be deleted from the circuit of Figure 1 for embodiments where the microcontroller includes equivalent circuitry of the automatic gain control 8, amplifier 10, and multiplying buffer 12. For such embodiments, the output of the band pass filter 6 is directly accepted by the microcontroller 20. As additional functions become available on the microcontroller 20, other circuit functions such as the program memory, band pass filter, and input amplifier may be integrated, eliminating the need for separate circuits to provide these functions.
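A minimal sketch of the windowed feedback idea follows, assuming the feedback simply nudges a gain value once per time window so that the averaged in-band level settles near a fixed target. The function name, target, and correction factor are invented for illustration; the actual circuit works from the zero level 40 and the multiplied level 42 described above.

    #include <stdio.h>

    /* Nudge the gain once per time window so the windowed ambient estimate
     * approaches a target level (illustrative stand-in for feedback signals 36, 38). */
    static double adjust_gain(double gain, double window_avg, double target)
    {
        return gain + 0.1 * (target - window_avg);   /* small proportional correction */
    }

    int main(void)
    {
        double gain = 1.0, target = 50.0;
        double window_avg[] = {58.0, 55.0, 52.0, 50.5};   /* ambient estimate per window */

        for (int w = 0; w < 4; w++) {
            gain = adjust_gain(gain, window_avg[w], target);
            printf("window %d: ambient %.1f -> gain %.2f\n", w, window_avg[w], gain);
        }
        return 0;
    }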
The microcontroller circuitry includes the microcontroller 20 and a number of memory modules. The memory modules of the preferred embodiment include the program memory 22 and speech command memory 14, which are shown external to the microcontroller 20, but which may be internal to the microcontrollers of other embodiments of the invention. The program memory 22 is a Read Only Memory (ROM) module which stores the programming code of the microcontroller 20. The programming code establishes the sequence of events that are followed by the device to produce a control signal 44 in response to valid speech commands. The speech command memory 14 of the preferred embodiment employs a Random Access Memory (RAM) module which stores the speaker dependent speech commands. The speaker independent speech commands are stored in a separate ROM memory module (not shown) which may be internal to the microcontroller. The term "memory module" does not necessarily refer to separate circuit elements. For example, all ROM data may be stored in the same circuit element, but at different address block locations.
Power circuitry of the preferred embodiment, which supplies analog and digital operating voltages to the device circuitry, includes an AC source circuit 24, an AC to DC power supply circuit 26, an analog DC power supply circuit 28, and a digital DC power supply circuit 30. Standard utility AC is supplied to the AC source circuit 24 by means of a standard AC jack that is plugged into a standard AC outlet 72, as shown in Figure 2. The device may be adapted to be compatible with a 120 V or 230 V AC standard. The AC to DC power supply circuit 26 converts the standard utility AC to DC voltages which are fed into the analog DC power supply circuit 28 and the digital DC power supply circuit 30. The analog DC power supply circuit 28 supplies power to the input amplifier 4 and the microcontroller 20. The digital DC power supply circuit 30 supplies digital voltages to the microcontroller 20.
Standard utility AC is also supplied to an AC detect circuit 32 which is connected to the microcontroller 20 and the output control circuit 16. Upon recognition of a valid speech command, the microcontroller sends a control signal 44 to the output control circuit 16. The control signal 44 enables or disables a connection of the standard utility AC into the output control circuit 16. In the preferred embodiment, the output control circuit 16 includes a power switch that connects the standard utility AC to a standard AC plug.
For applications where the device is used as a dimmer, the AC detect circuit 32 synchronizes an AC signal with the standard utility AC to produce an AC waveform having an increased or decreased voltage. The AC waveform is connected to the output control circuit 16 by the control signal 44 upon recognition of a valid speech command.
In a second embodiment of the invention, the output control circuit 16 has modulation capability and can modulate and transmit a control signal on the standard utility AC via the AC source circuit 24 using current carrier technology as described herein. This capability allows the device to remotely control electronics and appliances that are connected to the same standard utility AC circuit. The AC detect circuit may also include a demodulator to detect and demodulate a signal from the standard utility AC. The demodulated signal is sent to the microcontroller 20 for a determination of whether the demodulated signal represents a valid speech command for an electrical appliance that is attached to the device. If the signal is a valid speech command, control signal 44 is sent to enable the output control circuit 16.
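A hypothetical sketch of the receiving device's decision is shown below. The demodulation itself is not modelled; the sketch assumes the demodulated command arrives as a word string that is checked against the local set of valid words and the attached product before the control signal is asserted. The word lists and names are invented for the example.

    #include <stdio.h>
    #include <string.h>

    /* Valid command words and the product attached to this particular device (assumed). */
    static const char *valid_words[] = {"LAMP", "FAN"};
    static const char *attached_product = "FAN";

    /* Returns 1 only if the demodulated word is valid and addressed to the attached product. */
    static int command_enables_output(const char *word)
    {
        for (unsigned i = 0; i < sizeof valid_words / sizeof valid_words[0]; i++)
            if (strcmp(word, valid_words[i]) == 0)
                return strcmp(word, attached_product) == 0;
        return 0;
    }

    int main(void)
    {
        const char *received[] = {"LAMP", "FAN", "STEREO"};
        for (int i = 0; i < 3; i++)
            printf("\"%s\": %s\n", received[i],
                   command_enables_output(received[i]) ? "send control signal 44" : "ignore");
        return 0;
    }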
The user interface circuit 34 of the preferred embodiment is connected to the microcontroller 20 and includes an indicating device such as an LED, or a speaker, or both. The user interface circuit 34, which is provided as a convenience for the user and does not affect the operation of the device, informs the user that the device has received either an invalid or a valid speech command. Without the visual or audio feedback, the user cannot be certain of the reason for a non-response of the device. For example, the user may not be enunciating the command correctly, or may be using an invalid speech command. Thus, the user feedback lessens frustration and leads to an increase in correct device responses. The indicating device includes at least one LED which may be uni- or multi-colored to prompt the user, signal an unrecognized command, and/or signal acceptance of a valid command. The indicating device may also include a display or another means of visually indicating the completion of an event. The speaker is connected to the microcontroller 20 via amplifier circuitry known in the art and provides a means for communicating spoken instructions or audio prompts which are stored in a memory module of the microcontroller 20. Obviously, a combination of an LED and a speaker will provide the highest degree of user convenience.
Figure 2 is a perspective view of one embodiment of a speech activated device. The device consists of a shell 60 that houses the speech recognition circuitry of Figure 1, and is designed to be compact and self-contained such that the entire device plugs into a utility outlet 72. The shell contains several openings for components of a speech recognition circuit, and may include openings for an indicating device 64, a speaker 70, a microphone 62, a microphone plug 68, a standard AC utility plug 66, and a standard utility jack (not shown) which plugs into the utility outlet 72. The electronic product or appliance cord 74 is plugged into a standard AC utility plug 66 which is located on a face of the shell 60.
The program stored in the program memory 22 of Figure 1 varies to accommodate the available device features and the desired mode of operation. Figure 3a is a flowchart of the functionality of a sample programming code of a preferred embodiment and is not meant to limit the programming possibilities. Start block 100 represents the initial power-up of the device after it has been plugged into the utility outlet 72 shown in Figure 2. The start block 100 may also include additional routines such as a mode of operation routine that prompts the user to record valid speech commands for a speaker dependent application. If user response is not forthcoming, the device defaults to a speaker independent mode of operation. Once the microcontroller 20 has established the operating parameters set forth in the start block 100, the microcontroller 20 proceeds to configure its input/output (I/O) ports. The I/O configuration is pre-determined and will vary with the parameters chosen in the start block 100. The indicating device 64 has a default value, and for the minimal preferred embodiment, the default is an "off" state in which the indicating device 64 is not illuminated. Whether the indicating device 64 is "on" or "off" to indicate an active listening state is a matter of preference, and in an alternate embodiment, the indicating device 64 is illuminated as the default mode to indicate that the device is actively listening.
Block 104, which is the default block for most of the decision blocks of the subsequent programming code, sets the indicating device into an "off" state. The first expected command word, which may be one of a set of first expected command words, is retrieved in block 106. The device waits for a pre-determined silence period in block 108. If there are no sounds which are within the frequency range of speech and above an ambient level for the duration of the silence period, then the silence is acceptable 110 and the device waits for a first speech utterance 112. If the silence is not acceptable, the program defaults to the default block 104 and restarts the above process.
The silence period is a limitation required by the technology, and as the technology improves, the silence period will approach zero. Technology that requires the program to pause between words is referred to as discrete speech or isolated speech technology. Discrete speech recognition systems can only recognize words that are spoken separately. In contrast, continuous speech technology does not require phrases of natural speech to be broken into distinct words separated by silences. The device of the preferred embodiment employs discrete speech technology with a silence period on the order of 0.01 to 0.07 seconds. This silence period will vary according to the microcontroller 20 employed.
Block 112 represents the continuously listening feature of the device, and the first utterance does not have to occur within a set time period. Once an utterance occurs, the signal, which is received through the microphone 2, is recorded in block 112. If an acceptable recording has occurred 114, then the duration of the word is checked 116. An acceptable recording 114 is a recording which contains data within the frequency range of speech, and a duration 116 is the actual time that it took to utter the word. Typical durations of words are known because the acceptable command words are from a pre-determined set. Thus, the utterance can be no longer than the longest valid command word. If the utterance is longer than the longest valid command word 118, the program defaults to default block 104; otherwise, the utterance is compared to the words included in the set of valid command words. The set of valid command words of the minimal preferred embodiment includes the word "LIGHTS." If the recorded utterance of block 112 favorably compares with the pre-recorded samples of the word "LIGHTS" 122, then the device has found a positive match, and proceeds to retrieve the second word values 124, which constitute a desired action such as "ON" or "OFF" or "LOW." The silence level is initialized in block 126, and has a minimum decibel level equal to the ambient noise level that is determined over a time window. The device listens for an acceptable silence level for a pre-determined time duration 128, and illuminates the indicating device. If the silence period is of an acceptable time duration 132, then the device records the second utterance 134. An acceptable recording 136 and utterance duration 138, 140 advance the program sequence to the point of recognizing the word 142. If the duration of the silence period or the recording is unacceptable, or the duration of the word is too long, then the program defaults to block 104, the indicating device is turned off, and the process starts again.
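The sequence of checks applied to the first word can be restated compactly in code. The sketch below mirrors the Figure 3a decisions (acceptable silence, acceptable recording, duration no longer than the longest valid command word, then comparison with "LIGHTS"); the structure fields and the duration limit are assumptions made for illustration only.

    #include <stdio.h>
    #include <string.h>

    struct utterance {
        int silence_ok;       /* an acceptable silence preceded the word        */
        int in_speech_band;   /* the recording contains data in the speech band */
        double duration_s;    /* actual time it took to utter the word          */
        const char *pattern;  /* stand-in for the recorded word pattern         */
    };

    #define LONGEST_VALID_WORD_S 0.8   /* assumed length of the longest command word */

    /* Mirror of the Figure 3a checks leading to a first-word match. */
    static int first_word_accepted(const struct utterance *u)
    {
        if (!u->silence_ok)                       return 0;   /* back to default block  */
        if (!u->in_speech_band)                   return 0;   /* unacceptable recording */
        if (u->duration_s > LONGEST_VALID_WORD_S) return 0;   /* word too long          */
        return strcmp(u->pattern, "LIGHTS") == 0;             /* compare with valid word */
    }

    int main(void)
    {
        struct utterance command = {1, 1, 0.55, "LIGHTS"};
        struct utterance chatter = {1, 1, 1.40, "CONVERSATION"};

        printf("command: %s\n", first_word_accepted(&command) ?
               "match, retrieve second word values" : "return to default block");
        printf("chatter: %s\n", first_word_accepted(&chatter) ?
               "match" : "return to default block");
        return 0;
    }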
Figures 3b and 3c are continuations of the program flowchart in Figure 3a. Figure 3b represents the program for a simple "ON" or "OFF" application, and Figure 3c describes the program flow for a light dimmer application. In the application of Figure 3b, if the second utterance is "ON" 144, and a switch included in the output control circuit 16 is open 148, then a control signal 44 is sent to the output control circuit 16 to close the switch 150. The closed switch connects standard utility AC to a utility plug. The program returns to the default block 104, turns off the indicating device, and waits for a new sequence of speech commands. If the utterance is not "ON" 144, then the utterance is compared to the pre-recorded word "OFF" 152. If the utterance is determined to be a match 152, then a determination is made regarding whether the switch of the output control circuit 16 is closed 154. If the switch is closed 154, a control signal 44 is sent to the output control circuit 16 to open the switch and disconnect AC power from the utility plug 156. All other outcomes return the program to the default block 104.
The continuation of the flowchart for a light dimmer application is illustrated in Figure 3c. The utterance is compared to pre-recorded words including "LOW" 158, "MEDIUM" 160, "OFF" 162, and "ON" 164. If a match is identified, control signal 44 is sent to the output control circuit 16 to close 166, 168, 172 or open 170 the switch. Also, the microcontroller 20 communicates with the AC detect circuit 32 to send a reduced or increased AC voltage level 166, 168 to the output control circuit 16. A non-matching utterance defaults the program to the default block 104.
Figures 1, 2, 3a, 3b, and 3c illustrate a preferred embodiment of a portable, generally palm-sized speech recognition device. The preferred embodiment is plugged into a standard wall socket and includes at least one AC plug for accepting the cord/jack of any AC operated device. The speech recognition device is an economical solution to controlling electrical devices by speech commands.
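The Figure 3c dispatch amounts to a small lookup from the second word to a switch state and an output voltage. The sketch below is illustrative only; the percentage levels are invented, since the patent describes only a reduced or increased AC voltage.

    #include <stdio.h>
    #include <string.h>

    struct action { const char *word; int switch_closed; int level_pct; };

    /* Second-word dispatch table for the dimmer application (levels are assumed). */
    static const struct action dimmer_table[] = {
        {"LOW",    1,  30},
        {"MEDIUM", 1,  60},
        {"ON",     1, 100},
        {"OFF",    0,   0},
    };

    int main(void)
    {
        const char *second_word = "MEDIUM";

        for (unsigned i = 0; i < sizeof dimmer_table / sizeof dimmer_table[0]; i++) {
            if (strcmp(second_word, dimmer_table[i].word) == 0) {
                printf("control signal 44: switch %s, output level %d%%\n",
                       dimmer_table[i].switch_closed ? "closed" : "open",
                       dimmer_table[i].level_pct);
                return 0;
            }
        }
        printf("no match: return to the default block\n");
        return 0;
    }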
Figures 4 and 5 illustrate another preferred embodiment wherein voice activated control circuitry is housed in a wall switch assembly 200 that includes a switch plate 210 and switch box 218. Figure 4 illustrates a front view of a wall switch plate 210. The preferred wall switch embodiment utilizes a capacitance touch plate 202 as the manual switching control. Other embodiments may utilize other touch pad technologies or mechanical switches. The switch plate 210 also includes a microphone 206 for accepting speech commands. The user interface of the preferred embodiment utilizes a green LED 208 and a red LED 204 to prompt the user and to indicate that the device is actively listening for a speech command. Other embodiments of the voice activated wall switch may utilize varying user interfaces including one or more LEDs of varying colors, one or more multi-colored LEDs, a character display device, a speaker for audio prompts, or any combination thereof. Standard switch plate screws 220 secure the switch plate 210 to the switch box 218.
Figure 5 is a cross section of the wall switch assembly 200. The switch box 218 houses a power circuit board 212 and a speech recognition circuit board 216 connected by at least one connector 222. In the preferred embodiment of the voice activated wall switch, the connector 222 includes connections for power signal lines and control signal lines. An aluminum base plate 214 provides structural support for the components of the switch box assembly 200. In addition, the aluminum plate 214 may act as a heat sink for various components on the power circuit board 212 by including wings or tabs that extend from the aluminum plate 214 to contact the power components.
The speech recognition circuit board 216 is a stand-alone item that may be incorporated into other electrical or electronic devices including various wall switch assemblies. Referring to Figure 6, the circuit board 216 has inputs that connect to a microphone 602 and AC source 624, and outputs that connect to one or more user interfaces 634, and a touch and dim controller 646 or any other suitable manual switch. Thus, the speech recognition circuit board 216 may be adapted to a particular application by connecting the inputs and outputs to appropriate components.
Figure 6 is a block diagram of a preferred embodiment of a speech recognition circuit board 216 and externally connected components for a voice activated wall switch assembly 200 as shown in Figures 4 and 5. The purpose and operation of the elements of the block diagram of Figure 6 are substantially similar to the elements of the block diagram of Figure 1. The microphone 602 connects to an input to the speech recognition circuit board 216 and converts speech and other sounds to electrical audio signals. The electrical audio signals are amplified by an input amplifier 604 and filtered by a band pass filter 606 to exclude frequencies outside the frequency range of speech.
An automatic gain control circuit 648 accepts the filtered audio signal from the band pass filter 606 and establishes an ambient noise input level for microcontroller 620. In other embodiments, the automatic gain control circuit 648 may be included in the microcontroller 620 allowing the output signal from the band pass filter 606 to be directly connected to the microcontroller 620.
The power circuitry of the preferred embodiment resides on the power circuit board 212 as shown in Figure 5. The power circuitry includes an AC source input circuit 624, an AC to DC power supply circuit 626, an analog DC power supply circuit 628, a digital DC power supply circuit 630, and an AC detect circuit 632. The AC source input circuit 624 is directly connected to an AC circuit provided to the wall switch. In other embodiments, portions of the power circuitry may reside on the speech recognition circuit board 216.
The microcontroller circuitry includes the microcontroller 620, program memory 622, and speech command memory 614, which are shown external to the microcontroller 620, but which may be internal to microcontrollers of other embodiments of the invention. The program memory 622 is a Read Only Memory (ROM) for storing programming code of the microcontroller 620. The program memory 622 or an additional ROM stores speaker independent words. The speech command memory 614 of the preferred embodiment stores speaker dependent speech commands that are programmed by a user into the device during a programming mode.
The user interface 634 of the preferred embodiment of the voice activated wall switch assembly 200 includes a green LED 208 and a red LED 204 as illustrated in Figure 4. Other embodiments may include a single LED, or any other type of indicator that is controllable by the microcontroller 620. The user interface 634 provides visual prompts for the user to indicate that the circuit is operating and accepting speech commands or programming mode inputs of speaker dependent commands.
Output control signal 644 is generated by the microcontroller 620 for instructing the output control circuitry 616 to switch power on or off, to toggle power, or to reduce power, i.e. dim lights. The output control circuitry 616 of the preferred embodiment is located on the power circuit board 212, and is connected to the speech recognition circuit board 216 via the connector 222. It should be noted that the division of circuitry between the speech recognition circuit board 216 and the power circuit board 212 is a matter of design convenience only. Other embodiments of the voice activated wall switch may vary the locations of the electrical components.
The voice activated wall switch has a manual touch and dim controller 646 that includes the touch pad 202 of Figures 4 and 5 for manually controlling a switch located in the output control circuitry 616. The speech control and the touch control are simultaneously active. In addition, the current state of the power is known by the microcontroller 620. For example, if an electrical device, e.g. a light, is "ON," then pressing the touch pad 202 will toggle the power "OFF." If a user subsequently uses a speech command "LIGHTS," the power is switched "ON."
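Because both inputs act on a single power state held by the microcontroller, either input always inverts the current state. A trivial sketch, with invented function names, makes the shared-state behaviour explicit.

    #include <stdio.h>

    static int power_on = 1;   /* assume the light starts out "ON", as in the example above */

    /* Both the touch pad and a recognized speech command invert the same state. */
    static void toggle_power(const char *source)
    {
        power_on = !power_on;
        printf("%s: power %s\n", source, power_on ? "ON" : "OFF");
    }

    int main(void)
    {
        toggle_power("touch pad press");           /* ON  -> OFF */
        toggle_power("speech command \"LIGHTS\""); /* OFF -> ON  */
        return 0;
    }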
Figures 7a through 7h illustrate a flow diagram for the microcontroller program ("the program") of a preferred embodiment of the voice activated wall switch. Referring to Figure 7a, start blocks 700 and 702 represent an initial application of AC power to the wall switch circuitry that occurs during installation of the device. A red LED 204, as shown in Figure 4, is illuminated 704 to indicate that the device has power. The green LED 208 is also illuminated 706 to indicate that the device is listening for a word 708. If the user does not issue a speech command 710, then the program branches to a default mode 712. If the user utters a user independent command as shown in decision blocks 714, 716, 718, then the program branches to the appropriate mode of operation. The command "PROGRAM" 720 causes the microcontroller 620 to initiate a programming mode to learn user dependent commands.
An unrecognizable command, i.e. a command that is not in microcontroller memory, causes the program to branch to default mode 722 shown in Figure 7b.
Figure 7b illustrates a Default Mode 722 of the wall switch of the preferred embodiment. The microcontroller 620 determines whether the user has pressed the touch pad three times 724 for the purpose of resetting the mode of operation. Other embodiments of the program may use a different mode reset requirement, e.g. two quick presses to the touch pad rather than three. As described herein, a single press to the touch pad toggles the state of the applied AC power to "on" or "off." In the preferred embodiment of the voice activated wall switch, three presses to the touch pad cause the program to branch to the start of the mode selection sequence, block 704 of Figure 7a. If a reset condition is not detected, the green LED 208 is illuminated 726, and the voice activated device waits for a command word. If the user says the command "LIGHTS" 728, the green LED 208 is turned off and the microcontroller 620 sends a control signal 644, as shown in Figure 6, to the output control circuit 616, to toggle the power 734. Upon toggling the power in block 734, the program loops back to the start of default mode 722.
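The triple-press reset can be sketched as counting presses that arrive close together: three grouped presses branch to mode selection, while a lone press simply toggles the power. The one-second grouping interval below is an assumption for the example, not a value from the patent.

    #include <stdio.h>

    #define GROUP_INTERVAL_S 1.0   /* presses closer together than this count as one group (assumed) */

    int main(void)
    {
        /* Press times in seconds: a single press, then three rapid presses. */
        double press_times[] = {2.0, 10.0, 10.4, 10.8};
        int n = sizeof press_times / sizeof press_times[0];

        int run = 1;
        for (int i = 0; i < n; i++) {
            int last_in_group = (i == n - 1) ||
                                (press_times[i + 1] - press_times[i] > GROUP_INTERVAL_S);
            if (!last_in_group) {
                run++;
                continue;
            }
            if (run >= 3)
                printf("t=%.1fs: %d presses: branch to mode selection (block 704)\n",
                       press_times[i], run);
            else
                printf("t=%.1fs: %d press(es): toggle the power\n", press_times[i], run);
            run = 1;
        }
        return 0;
    }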
A Lights On/Off Mode is illustrated in Figure 7c. The program checks for a reset condition 740. If the user has not initiated a reset, the green LED 208 and the red LED 204 illuminate 742 to indicate that the voice activated device expects to receive an acceptable first word. Any detected word other than "LIGHTS" 744 causes the program to branch to the start of the Lights On/Off Mode 738. If the word "LIGHTS" is detected 744, the green LED 208 is illuminated 748 to indicate that the device expects to receive a second acceptable word, "ON" or "OFF" 750. If the device does not detect either acceptable word 750, the program branches to the start of the Lights On/Off Mode 738. If the device detects either the word "ON" or "OFF" 750, the green LED 208 is turned off and the power is toggled accordingly 754.
Figure 7d illustrates the user independent Computer Lights On/Off Mode 758. Absent a reset condition 760, the device illuminates both LEDs 204, 208, and waits for an acceptable first word 762. If the word "COMPUTER" is detected 764, the green LED 208 is illuminated 768 to indicate that the device is waiting for a second word, "LIGHTS," "ON," or "OFF" 770. Upon detection of an acceptable word, the green LED 208 is turned off and the power is toggled accordingly 774. Figure 7e also illustrates a user independent mode in blocks 788 through 804. If the user does not reset the mode 790, the LEDs illuminate 792. The first expected word is "INTELSWITCH" 794 and the second expected word is either "ON" or "OFF" 800. If the words are not detected in the appropriate order, the program turns off the illuminated LEDs 796, 802 and branches to the start of the mode 788. An acceptable sequence of commands causes the microcontroller 620 to send a control signal 644 to switch the power either "ON" or "OFF" 804.
The program flowchart of the preferred embodiment of a voice activated wall switch illustrates a default mode and three additional user independent modes. The words used as command words for these illustrative modes are not meant to be limiting, and other words and sequences of words may be programmed into program memory.
Figures 7f, 7g, and 7h illustrate the flowchart for a speaker dependent mode 806. The red and green LEDs 204, 208 are flashed once to indicate that the device is waiting to record the first word 808. The green LED 208 is then illuminated to indicate that the device is listening for the first word 810. If the device has not recorded a third silence period 812, then the program determines whether an acceptable recording has occurred 816, and either branches to the start of the user dependent mode 808, or indicates a valid recording 818 by flashing the red LED 204. Upon detection of a third silence period, the program checks whether words have been recorded in memory 814. If memory contains user dependent words, the program branches to wait for a user dependent command. If the recorded word is the first recording 820, the program stores the word pattern in a temporary memory 822 and branches to 808 to prompt the user to repeat the word. If the recorded word is the second or greater recording 820, the program compares the currently recorded word with the previously recorded word 824. If the word patterns match 826, then the first word is saved in memory 828. If the word patterns do not match 826, the program branches to 808 to prompt the user to repeat the command word until the user can repeat the word in a substantially similar manner.
Figure 7g is a continuation of the user dependent programming mode. Once a first word has been successfully recorded, the LEDs are flashed to indicate that the device is waiting for a second word 834. The green LED 208 is illuminated to indicate that the unit is ready to record the second word 836. If a silence period is recorded three times 838, then the program branches to detect a valid command word. If a valid word pattern is recorded 840, the red LED 204 flashes once 842. If the user has not repeated the word 844, the word is recorded in a temporary memory 852, and the program branches to the beginning of the sequence to record a second word 834. If the user has repeated the word at least once 844, the program compares the current word pattern with the previously recorded word pattern 846 to determine whether a match exists 848. If the patterns match 848, the user has successfully programmed the second word which is stored in a second position in memory 850.
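The record, repeat, and compare loop of the programming mode can be summarized in a short sketch. Here a string comparison stands in for the pattern match performed on the actual recordings; the example words are invented, and the two-slot limit simply follows the two-word sequence described above.

    #include <stdio.h>
    #include <string.h>

    #define MAX_WORDS 2   /* a two-word user dependent sequence, as in the description above */

    int main(void)
    {
        /* Simulated recording attempts: the second attempt fails to match the first. */
        const char *attempts[] = {"KITCHEN", "KITCHEM", "KITCHEN", "KITCHEN"};
        int n_attempts = sizeof attempts / sizeof attempts[0];

        const char *stored[MAX_WORDS];
        int stored_count = 0;
        const char *pending = NULL;   /* temporary memory holding the first take */

        for (int i = 0; i < n_attempts && stored_count < MAX_WORDS; i++) {
            if (pending == NULL) {
                pending = attempts[i];                 /* first take: hold it, ask the user to repeat */
                printf("recorded \"%s\": flash the LED and wait for a repeat\n", attempts[i]);
            } else if (strcmp(pending, attempts[i]) == 0) {
                stored[stored_count++] = attempts[i];  /* takes match: save the word in memory */
                printf("\"%s\" confirmed and stored\n", attempts[i]);
                pending = NULL;
            } else {
                printf("\"%s\" does not match \"%s\": start the word over\n", attempts[i], pending);
                pending = NULL;
            }
        }

        for (int i = 0; i < stored_count; i++)
            printf("memory slot %d: \"%s\"\n", i + 1, stored[i]);
        return 0;
    }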
Once a sequence of user dependent commands is successfully recorded, the device is ready to respond to user dependent commands as illustrated in the flowchart of Figure 7h. If a reset condition is not detected 856, the device illuminates the green LED 208 to indicate that it is waiting to accept a command 858. If a speech command is successfully recorded 860, the LED is turned off 864 and the program determines whether the word is a valid user dependent word. If the word is recorded in memory 866, the power is toggled 868. Otherwise, the program loops back to 856 to await a valid user dependent command.
The flowchart of Figure 7h illustrates a minimal example of recognizing a single user dependent word. However, other embodiments of the program may require a sequence of words before the power is toggled 868. Other embodiments of the program may also respond according to a particular command such as "DIM" or "ON" or "OFF." Thus, the flowchart is presented for illustrative purposes and is not meant to limit the breadth of the microcontroller program. It is evident that there are additional embodiments which are not illustrated above but which are clearly within the scope and spirit of the present invention. The above description and drawings are therefore intended to be exemplary only and the scope of the invention is to be limited solely by the appended claims.
WE CLAIM:

Claims

1. A device for responding to a speech command of a device user, said device comprising: an AC circuit having means for producing a plurality of operating voltages for said circuit, said AC circuit having an input of a standard utility AC; a microphone circuit having means for producing electrical signals in response to sounds; a filter circuit having means for filtering said electrical signals, said filter circuit producing filtered electrical signals having frequencies within a frequency range of speech; a microcontroller circuit having means for detecting a valid speech command of a plurality of valid speech commands, said microcontroller having a plurality of inputs comprising said filtered electrical signals and a plurality of outputs comprising at least one control signal for controlling at least one switching means; said at least one switching means for connecting said standard utility AC to said at least one AC circuit; a manual control input circuit for manually controlling said at least one switching means; and at least one indicator connected to said microcontroller, said at least one indicator for prompting said device user to other speech commands when said microcontroller circuit is enabled to receive a speech command.
2. The device for responding to a speech command as in claim 1, further comprising a shell for encasing said circuit, said shell comprising a wall switch assembly.
3. The device for responding to a speech command as in claim 1, further comprising an automatic gain control circuit having means for producing an ambient level signal from said filtered electrical signals.
4. The device for responding to a speech command as in claim 1, said device further comprising: a means for choosing a plurality of modes of operation, a first mode of operation of said plurality of modes of operation for programming said device with said plurality of valid speech commands.
5. The device for responding to a speech command as in claim 1, further comprising a dimmer circuit having means for producing a reduced-voltage AC signal, wherein said at least one switching means connects said reduced-voltage AC signal to said at least one AC circuit.
6. The device for responding to a speech command as in claim 1, said device further comprising: a means for producing speech instructions; and at least one speaker means for outputting said speech instructions.
7. The device for responding to a speech command as in claim 1, said device further comprising: a modulator circuit having means for modulating said standard utility AC with a second control signal of said at least one control signal.
8. The device for responding to a speech command as in claim 6, said device further comprising: a demodulator circuit means connected to said microcontroller for demodulating a modulated control signal from said standard utility AC.
9. An apparatus for producing a plurality of control signals, said apparatus comprising:
a microphone circuit for receiving sounds and converting said sounds into electrical signals, said microphone circuit having an input amplifier for amplifying said electrical signals, and a band pass filter for filtering sounds outside a frequency range of speech, said band pass filter outputting a filtered signal;
a processor circuit comprising:
a microcontroller having an input of said filtered signal, said microcontroller producing at least one AC control output signal upon recognition of a valid speech command of a set of valid speech commands, said set of valid speech commands comprising user speech commands and preprogrammed speech commands;
a plurality of memory modules for storing data, said data comprising: at least one instruction set for said microcontroller; said user speech commands; and said pre-programmed speech commands;
a power circuit for supplying digital and analog power to said control circuit, said power circuit receiving an AC input from an AC circuit;
an AC output circuit for connecting an AC signal to at least one AC circuit, said AC output circuit enabled by said at least one AC control output signal;
a manual control circuit for manually controlling said AC output circuit; and
a casing for holding said control circuit.
10. The apparatus as in claim 9, wherein said processor circuit further comprises an automatic gain control circuit for producing at least one ambient level signal from said filtered signal.
11. The apparatus as in claim 9, wherein said casing is designed to replace a wall switch assembly.
12. The apparatus as in claim 9, further comprising at least one indicator for prompting a user of said apparatus and for confirming recognition of said valid speech command.
13. The apparatus as in claim 9, wherein said band pass filter filters out sounds outside a frequency range of 580 Hz - 4.2 kHz.
14. The apparatus as in claim 9, wherein said control circuit further comprises a speaker circuit, said speaker circuit for communicating speech instructions from said processor circuit.
15. The apparatus as in claim 9, wherein said power circuit further comprises a dimmer circuit for producing an AC signal having a reduced voltage at a standard utility frequency.
16. A method for controlling a device using speech recognition, said device having an input of standard utility AC and an AC output switch connected to an AC circuit, said method comprising the steps of:
accepting environmental sounds and speech sounds;
converting said environmental sounds and speech sounds into a plurality of electrical signals;
filtering said plurality of electrical signals that are outside a range of speech to produce at least one filtered electrical signal;
comparing said at least one filtered electrical signal with a predetermined set of speech sounds; and
producing at least one output signal when said at least one filtered electrical signal matches said predetermined set of speech sounds;
wherein said at least one output signal controls said AC output switch.
17. The method for controlling a device as in claim 16, further comprising the step of: toggling said AC output switch utilizing a manual control means.
18. The method for controlling a device as in claim 17, wherein said step of toggling said AC output switch comprises the step of: pressing a touch pad.
19. The method for controlling a device as in claim 16, further comprising the step of: illuminating at least one indicator to indicate a ready state for accepting said speech sounds.
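Purely as an informal illustration of the method of claim 16 (using the speech band recited in claim 13), the sketch below band-pass filters a block of samples to 580 Hz - 4.2 kHz, compares the result against stored templates with a toy correlation measure, and toggles a switch state on a match. The sample rate, threshold, and the scipy-based filter are assumptions made for the demonstration and do not form part of the claims.

import numpy as np
from scipy.signal import butter, sosfilt

FS = 16000                                       # sample rate of the demo signal

def speech_band_filter(samples, low=580.0, high=4200.0):
    """Keep only the speech band of the input samples."""
    sos = butter(4, [low, high], btype="bandpass", fs=FS, output="sos")
    return sosfilt(sos, samples)

def matches_command(filtered, template, threshold=0.7):
    """Toy comparison: normalized correlation against a stored pattern."""
    n = min(len(filtered), len(template))
    a, b = filtered[:n], template[:n]
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return denom > 0 and abs(np.dot(a, b)) / denom > threshold

def control(samples, templates, switch_state=False):
    """Produce the output-switch state from one block of microphone samples."""
    filtered = speech_band_filter(samples)
    if any(matches_command(filtered, t) for t in templates):
        switch_state = not switch_state          # the output signal drives the switch
    return switch_state

if __name__ == "__main__":
    t = np.arange(FS) / FS
    tone = np.sin(2 * np.pi * 1000 * t)          # a 1 kHz "command" inside the band
    print("switch:", control(tone, templates=[speech_band_filter(tone)]))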
PCT/US1998/027648 1998-01-02 1998-12-30 Voice activated switch method and apparatus WO1999035637A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU20960/99A AU2096099A (en) 1998-01-02 1998-12-30 Voice activated switch method and apparatus

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US243698A 1998-01-02 1998-01-02
US09/002,436 1998-01-02
US09/133,724 US6188986B1 (en) 1998-01-02 1998-08-13 Voice activated switch method and apparatus
US09/133,724 1998-08-13

Publications (1)

Publication Number Publication Date
WO1999035637A1 true WO1999035637A1 (en) 1999-07-15

Family

ID=26670372

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1998/027648 WO1999035637A1 (en) 1998-01-02 1998-12-30 Voice activated switch method and apparatus

Country Status (3)

Country Link
US (2) US6188986B1 (en)
AU (1) AU2096099A (en)
WO (1) WO1999035637A1 (en)

Families Citing this family (138)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6188986B1 (en) * 1998-01-02 2001-02-13 Vos Systems, Inc. Voice activated switch method and apparatus
DE19825760A1 (en) * 1998-06-09 1999-12-16 Nokia Mobile Phones Ltd Procedure for assigning a selectable option to an actuator
JP4812941B2 (en) * 1999-01-06 2011-11-09 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Voice input device having a period of interest
US6385581B1 (en) * 1999-05-05 2002-05-07 Stanley W. Stephenson System and method of providing emotive background sound to text
SE9902229L (en) * 1999-06-07 2001-02-05 Ericsson Telefon Ab L M Apparatus and method of controlling a voice controlled operation
US6509730B1 (en) * 2000-02-25 2003-01-21 International Resources Group Ltd. Method of environmental performance measurement
US8645137B2 (en) 2000-03-16 2014-02-04 Apple Inc. Fast, language-independent method for user authentication by voice
WO2002011497A1 (en) * 2000-07-27 2002-02-07 Color Kinetics Incorporated Lighting control using speech recognition
GB0029573D0 (en) * 2000-12-02 2001-01-17 Hewlett Packard Co Activation of voice-controlled apparatus
DE10060587A1 (en) * 2000-12-06 2002-06-13 Philips Corp Intellectual Pty Automatic control of actions and operations during lectures and recitals, involves comparing lecture or recital text with stored words or word combinations
US7039590B2 (en) * 2001-03-30 2006-05-02 Sun Microsystems, Inc. General remote using spoken commands
US7418392B1 (en) * 2003-09-25 2008-08-26 Sensory, Inc. System and method for controlling the operation of a device by voice commands
US20050256720A1 (en) * 2004-05-12 2005-11-17 Iorio Laura M Voice-activated audio/visual locator with voice recognition
US20060146652A1 (en) * 2005-01-03 2006-07-06 Sdi Technologies, Inc. Sunset timer
US20060271368A1 (en) * 2005-05-25 2006-11-30 Yishay Carmiel Voice interface for consumer products
US20060287864A1 (en) * 2005-06-16 2006-12-21 Juha Pusa Electronic device, computer program product and voice control method
US8677377B2 (en) 2005-09-08 2014-03-18 Apple Inc. Method and apparatus for building an intelligent automated assistant
US9318108B2 (en) 2010-01-18 2016-04-19 Apple Inc. Intelligent automated assistant
US20080091432A1 (en) * 2006-10-17 2008-04-17 Donald Dalton System and method for voice control of electrically powered devices
US7894942B2 (en) * 2007-06-22 2011-02-22 Dsa, Inc. Intelligent device control system
US7765033B2 (en) * 2007-06-22 2010-07-27 Dsa, Inc. Intelligent device control system
US8099289B2 (en) * 2008-02-13 2012-01-17 Sensory, Inc. Voice interface and search for electronic devices including bluetooth headsets and remote systems
US8996376B2 (en) 2008-04-05 2015-03-31 Apple Inc. Intelligent text-to-speech conversion
US10496753B2 (en) 2010-01-18 2019-12-03 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US8508356B2 (en) 2009-02-18 2013-08-13 Gary Stephen Shuster Sound or radiation triggered locating device with activity sensor
US10241752B2 (en) 2011-09-30 2019-03-26 Apple Inc. Interface for a virtual digital assistant
US10241644B2 (en) 2011-06-03 2019-03-26 Apple Inc. Actionable reminder entries
US9431006B2 (en) 2009-07-02 2016-08-30 Apple Inc. Methods and apparatuses for automatic speech recognition
US10705794B2 (en) * 2010-01-18 2020-07-07 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US10679605B2 (en) 2010-01-18 2020-06-09 Apple Inc. Hands-free list-reading by intelligent automated assistant
US10553209B2 (en) 2010-01-18 2020-02-04 Apple Inc. Systems and methods for hands-free notification summaries
US8682667B2 (en) 2010-02-25 2014-03-25 Apple Inc. User profiling for selecting user specific voice input processing information
CN201674409U (en) * 2010-05-10 2010-12-15 维尔斯电子(昆山)有限公司 Voice control type power supply device
US9370081B2 (en) * 2010-11-29 2016-06-14 Antonio Pantolios Josefides System and method for a delayed light switch network
US20150106089A1 (en) * 2010-12-30 2015-04-16 Evan H. Parker Name Based Initiation of Speech Recognition
US8914287B2 (en) * 2010-12-31 2014-12-16 Echostar Technologies L.L.C. Remote control audio link
US9262612B2 (en) 2011-03-21 2016-02-16 Apple Inc. Device access using voice authentication
US8994660B2 (en) 2011-08-29 2015-03-31 Apple Inc. Text correction processing
US8340975B1 (en) * 2011-10-04 2012-12-25 Theodore Alfred Rosenberger Interactive speech recognition device and system for hands-free building control
US8666751B2 (en) 2011-11-17 2014-03-04 Microsoft Corporation Audio pattern matching for device activation
CN102540946A (en) * 2012-02-15 2012-07-04 浙江大学 Singlechip-based prerecording anti-jamming sound controller
EP2639793B1 (en) * 2012-03-15 2016-04-20 Samsung Electronics Co., Ltd Electronic device and method for controlling power using voice recognition
US9280610B2 (en) 2012-05-14 2016-03-08 Apple Inc. Crowd sourcing information to fulfill user requests
US9721563B2 (en) 2012-06-08 2017-08-01 Apple Inc. Name recognition system
US9779757B1 (en) 2012-07-30 2017-10-03 Amazon Technologies, Inc. Visual indication of an operational state
US9786294B1 (en) 2012-07-30 2017-10-10 Amazon Technologies, Inc. Visual indication of an operational state
US9547647B2 (en) 2012-09-19 2017-01-17 Apple Inc. Voice-based media searching
US9043210B1 (en) 2012-10-02 2015-05-26 Voice Security Systems, Inc. Biometric voice command and control switching device and method of use
US8862476B2 (en) * 2012-11-16 2014-10-14 Zanavox Voice-activated signal generator
KR101732137B1 (en) * 2013-01-07 2017-05-02 삼성전자주식회사 Remote control apparatus and method for controlling power
US9721586B1 (en) 2013-03-14 2017-08-01 Amazon Technologies, Inc. Voice controlled assistant with light indicator
US10748529B1 (en) * 2013-03-15 2020-08-18 Apple Inc. Voice activated device for use with a voice-based digital assistant
EP3000241B1 (en) 2013-05-23 2019-07-17 Knowles Electronics, LLC Vad detection microphone and method of operating the same
US9711166B2 (en) 2013-05-23 2017-07-18 Knowles Electronics, Llc Decimation synchronization in a microphone
US10020008B2 (en) 2013-05-23 2018-07-10 Knowles Electronics, Llc Microphone and corresponding digital interface
WO2014197336A1 (en) 2013-06-07 2014-12-11 Apple Inc. System and method for detecting errors in interactions with a voice-based digital assistant
US9582608B2 (en) 2013-06-07 2017-02-28 Apple Inc. Unified ranking with entropy-weighted information for phrase-based semantic auto-completion
WO2014197334A2 (en) 2013-06-07 2014-12-11 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
WO2014197335A1 (en) 2013-06-08 2014-12-11 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
US10176167B2 (en) 2013-06-09 2019-01-08 Apple Inc. System and method for inferring user intent from speech inputs
KR101922663B1 (en) 2013-06-09 2018-11-28 애플 인크. Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant
US10873997B2 (en) * 2013-08-01 2020-12-22 Fong-Min Chang Voice controlled artificial intelligent smart illumination device
US9502028B2 (en) 2013-10-18 2016-11-22 Knowles Electronics, Llc Acoustic activity detection apparatus and method
US9147397B2 (en) * 2013-10-29 2015-09-29 Knowles Electronics, Llc VAD detection apparatus and method of operating the same
US9430463B2 (en) 2014-05-30 2016-08-30 Apple Inc. Exemplar-based natural language processing
US9842101B2 (en) 2014-05-30 2017-12-12 Apple Inc. Predictive conversion of language input
US9338493B2 (en) 2014-06-30 2016-05-10 Apple Inc. Intelligent automated assistant for TV user interactions
US9818400B2 (en) 2014-09-11 2017-11-14 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US10789041B2 (en) 2014-09-12 2020-09-29 Apple Inc. Dynamic thresholds for always listening speech trigger
US9668121B2 (en) 2014-09-30 2017-05-30 Apple Inc. Social reminders
US9646609B2 (en) 2014-09-30 2017-05-09 Apple Inc. Caching apparatus for serving phonetic pronunciations
US9886432B2 (en) 2014-09-30 2018-02-06 Apple Inc. Parsimonious handling of word inflection via categorical stem + suffix N-gram language models
US10074360B2 (en) 2014-09-30 2018-09-11 Apple Inc. Providing an indication of the suitability of speech recognition
US10127911B2 (en) 2014-09-30 2018-11-13 Apple Inc. Speaker identification and unsupervised speaker adaptation techniques
TW201640322A (en) 2015-01-21 2016-11-16 諾爾斯電子公司 Low power voice trigger for acoustic apparatus and method
US10121472B2 (en) 2015-02-13 2018-11-06 Knowles Electronics, Llc Audio buffer catch-up apparatus and method with two microphones
US9865280B2 (en) 2015-03-06 2018-01-09 Apple Inc. Structured dictation using intelligent automated assistants
US9721566B2 (en) 2015-03-08 2017-08-01 Apple Inc. Competing devices responding to voice triggers
US9886953B2 (en) 2015-03-08 2018-02-06 Apple Inc. Virtual assistant activation
US10567477B2 (en) 2015-03-08 2020-02-18 Apple Inc. Virtual assistant continuity
US9899019B2 (en) 2015-03-18 2018-02-20 Apple Inc. Systems and methods for structured stem and suffix language models
US9842105B2 (en) 2015-04-16 2017-12-12 Apple Inc. Parsimonious continuous-space phrase representations for natural language processing
US10083688B2 (en) 2015-05-27 2018-09-25 Apple Inc. Device voice control for selecting a displayed affordance
US10127220B2 (en) 2015-06-04 2018-11-13 Apple Inc. Language identification from short strings
US10101822B2 (en) 2015-06-05 2018-10-16 Apple Inc. Language input correction
US9578173B2 (en) 2015-06-05 2017-02-21 Apple Inc. Virtual assistant aided communication with 3rd party service in a communication session
US10255907B2 (en) 2015-06-07 2019-04-09 Apple Inc. Automatic accent detection using acoustic models
US10186254B2 (en) 2015-06-07 2019-01-22 Apple Inc. Context-based endpoint detection
US11025565B2 (en) 2015-06-07 2021-06-01 Apple Inc. Personalized prediction of responses for instant messaging
US9478234B1 (en) 2015-07-13 2016-10-25 Knowles Electronics, Llc Microphone apparatus and method with catch-up buffer
US10671428B2 (en) 2015-09-08 2020-06-02 Apple Inc. Distributed personal assistant
US10747498B2 (en) 2015-09-08 2020-08-18 Apple Inc. Zero latency digital assistant
US9697820B2 (en) 2015-09-24 2017-07-04 Apple Inc. Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks
US10366158B2 (en) 2015-09-29 2019-07-30 Apple Inc. Efficient word encoding for recurrent neural network language models
US11010550B2 (en) 2015-09-29 2021-05-18 Apple Inc. Unified language modeling framework for word prediction, auto-completion and auto-correction
US11587559B2 (en) 2015-09-30 2023-02-21 Apple Inc. Intelligent device identification
US9678954B1 (en) * 2015-10-29 2017-06-13 Google Inc. Techniques for providing lexicon data for translation of a single word speech input
CN106653010B (en) 2015-11-03 2020-07-24 络达科技股份有限公司 Electronic device and method for waking up electronic device through voice recognition
US10691473B2 (en) 2015-11-06 2020-06-23 Apple Inc. Intelligent automated assistant in a messaging environment
US10049668B2 (en) 2015-12-02 2018-08-14 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US10223066B2 (en) 2015-12-23 2019-03-05 Apple Inc. Proactive assistance based on dialog communication between devices
US9826599B2 (en) * 2015-12-28 2017-11-21 Amazon Technologies, Inc. Voice-controlled light switches
US10446143B2 (en) 2016-03-14 2019-10-15 Apple Inc. Identification of voice inputs providing credentials
US9934775B2 (en) 2016-05-26 2018-04-03 Apple Inc. Unit-selection text-to-speech synthesis based on predicted concatenation parameters
US9972304B2 (en) 2016-06-03 2018-05-15 Apple Inc. Privacy preserving distributed evaluation framework for embedded personalized systems
US10249300B2 (en) 2016-06-06 2019-04-02 Apple Inc. Intelligent list reading
US10049663B2 (en) 2016-06-08 2018-08-14 Apple, Inc. Intelligent automated assistant for media exploration
DK179588B1 (en) 2016-06-09 2019-02-22 Apple Inc. Intelligent automated assistant in a home environment
US10490187B2 (en) 2016-06-10 2019-11-26 Apple Inc. Digital assistant providing automated status report
US10509862B2 (en) 2016-06-10 2019-12-17 Apple Inc. Dynamic phrase expansion of language input
US10192552B2 (en) 2016-06-10 2019-01-29 Apple Inc. Digital assistant providing whispered speech
US10586535B2 (en) 2016-06-10 2020-03-10 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US10067938B2 (en) 2016-06-10 2018-09-04 Apple Inc. Multilingual word prediction
DK179415B1 (en) 2016-06-11 2018-06-14 Apple Inc Intelligent device arbitration and control
DK179049B1 (en) 2016-06-11 2017-09-18 Apple Inc Data driven natural language event detection and classification
DK201670540A1 (en) 2016-06-11 2018-01-08 Apple Inc Application integration with a digital assistant
DK179343B1 (en) 2016-06-11 2018-05-14 Apple Inc Intelligent task discovery
US10043516B2 (en) 2016-09-23 2018-08-07 Apple Inc. Intelligent automated assistant
US10506204B2 (en) 2016-10-10 2019-12-10 At&T Digital Life, Inc. State detection and voice guided setup for a video doorbell unit
US20180124901A1 (en) * 2016-10-12 2018-05-03 Sampath Sripathy Speech-activated dimmable led
KR102623272B1 (en) * 2016-10-12 2024-01-11 삼성전자주식회사 Electronic apparatus and Method for controlling electronic apparatus thereof
US10593346B2 (en) 2016-12-22 2020-03-17 Apple Inc. Rank-reduced token representation for automatic speech recognition
CA3155440A1 (en) 2017-02-07 2018-08-16 Lutron Technology Company Llc Audio-based load control system
JP7190446B2 (en) 2017-05-08 2022-12-15 シグニファイ ホールディング ビー ヴィ voice control
DK201770439A1 (en) 2017-05-11 2018-12-13 Apple Inc. Offline personal assistant
DK179745B1 (en) 2017-05-12 2019-05-01 Apple Inc. SYNCHRONIZATION AND TASK DELEGATION OF A DIGITAL ASSISTANT
DK179496B1 (en) 2017-05-12 2019-01-15 Apple Inc. USER-SPECIFIC Acoustic Models
DK201770432A1 (en) 2017-05-15 2018-12-21 Apple Inc. Hierarchical belief states for digital assistants
DK201770431A1 (en) 2017-05-15 2018-12-20 Apple Inc. Optimizing dialogue policy decisions for digital assistants using implicit feedback
DK179560B1 (en) 2017-05-16 2019-02-18 Apple Inc. Far-field extension for digital assistant services
US10334702B1 (en) * 2017-06-26 2019-06-25 Amazon Technologies, Inc. Communication interface for front panel and power module
US10424299B2 (en) * 2017-09-29 2019-09-24 Intel Corporation Voice command masking systems and methods
EP3729650A4 (en) 2017-12-20 2021-08-18 Hubbell Incorporated Gesture control for in-wall device
MX2020006624A (en) 2017-12-20 2020-09-14 Hubbell Inc Voice responsive in-wall device.
CN109286706B (en) * 2018-10-12 2021-01-26 京东方科技集团股份有限公司 Display device
DE102019124230B4 * 2019-09-10 2023-01-12 Infineon Technologies Ag ELECTRONIC SHUTDOWN DEVICE AND METHOD OF SHUTTING DOWN EQUIPMENT
US20220358915A1 (en) * 2021-05-10 2022-11-10 Roku, Inc. Voice command recognition system
US11898291B2 (en) * 2021-10-07 2024-02-13 Haier Us Appliance Solutions, Inc. Appliance having a user interface with programmable light emitting diodes

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3555192A (en) * 1969-07-08 1971-01-12 Nasa Audio signal processor
US3818481A (en) 1972-08-14 1974-06-18 Codata Corp Multiple address direct coupled communication and control current carrier system
US4052568A (en) * 1976-04-23 1977-10-04 Communications Satellite Corporation Digital voice switch
US4119797A (en) 1977-06-29 1978-10-10 Technology Development Corporation Voice operated switch having an activation level which is higher than its sustaining level
FR2502370A1 (en) * 1981-03-18 1982-09-24 Trt Telecom Radio Electr NOISE REDUCTION DEVICE IN A SPEECH SIGNAL MIXED WITH NOISE
GB8613327D0 (en) 1986-06-02 1986-07-09 British Telecomm Speech processor
US4843627A (en) 1986-08-05 1989-06-27 Stebbins Russell T Circuit and method for providing a light energy response to an event in real time
US4829576A (en) 1986-10-21 1989-05-09 Dragon Systems, Inc. Voice recognition system
JPH0719662B2 (en) 1988-09-20 1995-03-06 和芙 橋本 Lighting lamp automatic switching device
US5086385A (en) * 1989-01-31 1992-02-04 Custom Command Systems Expandable home automation system
US5351272A (en) * 1992-05-18 1994-09-27 Abraham Karoly C Communications apparatus and method for transmitting and receiving multiple modulated signals over electrical lines
JPH03203794A (en) 1989-12-29 1991-09-05 Pioneer Electron Corp Voice remote controller
US5631375A (en) 1992-04-10 1997-05-20 Merrell Pharmaceuticals, Inc. Process for piperidine derivatives
US5430826A (en) 1992-10-13 1995-07-04 Harris Corporation Voice-activated switch
US5493618A (en) 1993-05-07 1996-02-20 Joseph Enterprises Method and apparatus for activating switches in response to different acoustic signals
US5790754A (en) 1994-10-21 1998-08-04 Sensory Circuits, Inc. Speech recognition apparatus for consumer electronic applications
US6188986B1 (en) * 1998-01-02 2001-02-13 Vos Systems, Inc. Voice activated switch method and apparatus

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE2840132A1 (en) * 1978-09-15 1980-03-27 Licentia Gmbh Power line socket control using radio receiver - which is placed near special socket outlet, different from standard outlets, and is plugged in as load control
US5488273A (en) * 1994-11-18 1996-01-30 Chang; Chin-Hsiung Ceiling fan and light assembly control method and the control circuit therefor
DE29713054U1 (en) * 1996-10-09 1997-11-06 Impex Handelsgesellschaft Mbh Voice controlled clock
DE29718636U1 (en) * 1997-10-21 1998-02-12 Rosenbaum Lothar Phonetic control, input and communication device with acoustic feedback, especially for woodworking machines

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6594630B1 (en) 1999-11-19 2003-07-15 Voice Signal Technologies, Inc. Voice-activated control for electrical device
WO2013076606A1 (en) * 2011-11-07 2013-05-30 Koninklijke Philips Electronics N.V. User interface using sounds to control a lighting system
US9642221B2 (en) 2011-11-07 2017-05-02 Philips Lighting Holding B.V. User interface using sounds to control a lighting system

Also Published As

Publication number Publication date
US20010000534A1 (en) 2001-04-26
US6324514B2 (en) 2001-11-27
US6188986B1 (en) 2001-02-13
AU2096099A (en) 1999-07-26

Similar Documents

Publication Publication Date Title
US6324514B2 (en) Voice activated switch with user prompt
US7418392B1 (en) System and method for controlling the operation of a device by voice commands
US6230137B1 (en) Household appliance, in particular an electrically operated household appliance
US6397186B1 (en) Hands-free, voice-operated remote control transmitter
US10531540B2 (en) Intelligent lamp holder and usage method applied therein
US7783278B2 (en) Installation of a personal emergency response system
KR20010020875A (en) Method and apparatus for controlling voice controlled devices
CA2369901A1 (en) Remote-control device of lamp series control box
US20030061033A1 (en) Remote control system for translating an utterance to a control parameter for use by an electronic device
KR20010020876A (en) Method and apparatus for enhancing activation of voice controlled devices
JP2000122684A (en) Voice control insertion port
KR20010020874A (en) Method and apparatus for standard voice user interface and voice controlled devices
JPH01179855A (en) Method of voice control for air conditioner
KR20180074200A (en) A voice recognition lighting device
JP3341365B2 (en) Voice adapter
KR100423495B1 (en) Operation control system by speech recognition for portable device and a method using the same
JP2001318689A (en) Remote controller by means of speech recognition
KR200312791Y1 (en) Electric current control multi tap built in speech recognition device
KR200270064Y1 (en) Speech recognition switch driving circuit
KR20170120753A (en) Voice controlled lighting system and operating method thereof
CN215453210U (en) Microphone capable of being awakened by voice
JPH11120647A (en) Voice command remote controller
EP1079352A1 (en) Remote voice control system
KR100191202B1 (en) Apparatus and method for automatic on/off sleeping light in a telephone
CN209980779U (en) Wireless receiving device and wireless microphone system

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG US UZ VN YU ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW SD SZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: KR

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase