EP1168299B1 - Method and system for preselection of suitable units for concatenative speech - Google Patents

Method and system for preselection of suitable units for concatenative speech

Info

Publication number
EP1168299B1
Authority
EP
European Patent Office
Prior art keywords
triphone
database
cost
phoneme
preselection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
EP01305403A
Other languages
German (de)
French (fr)
Other versions
EP1168299A3 (en)
EP1168299B8 (en)
EP1168299A2 (en)
Inventor
Alistair D. Conkie
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
AT&T Intellectual Property II LP
Original Assignee
AT&T Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by AT&T Corp filed Critical AT&T Corp
Publication of EP1168299A2 publication Critical patent/EP1168299A2/en
Publication of EP1168299A3 publication Critical patent/EP1168299A3/en
Application granted granted Critical
Publication of EP1168299B1 publication Critical patent/EP1168299B1/en
Publication of EP1168299B8 publication Critical patent/EP1168299B8/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 Speech synthesis; Text to speech systems
    • G10L13/06 Elementary speech units used in speech synthesisers; Concatenation rules
    • G10L13/07 Concatenation rules
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/02 Feature extraction for speech recognition; Selection of recognition unit
    • G10L2015/022 Demisyllables, biphones or triphones being the recognition units


Description

    Technical Field
  • The present invention relates to a system and method for increasing the speed of a unit selection synthesis system for concatenative speech synthesis and, more particularly, to predetermining a universe of phonemes - selected on the basis of their triphone context - that are potentially used in speech. Real-time selection is then performed from the created phoneme universe.
  • Background of the Invention
  • A current approach to concatenative speech synthesis is to use a very large database for recorded speech that has been segmented and labeled with prosodic and spectral characteristics, such as the fundamental frequency (F0) for voiced speech, the energy or gain of the signal, and the spectral distribution of the signal (i.e., how much of the signal is present at any given frequency). The database contains multiple instances of speech sounds. This multiplicity permits the possibility of having units in the database that are much less stylized than would occur in a diphone database (a "diphone" being defined as the second half of one phoneme followed by the initial half of the following phoneme, a diphone database generally containing only one instance of any given diphone). Therefore, the possibility of achieving natural speech is enhanced with the "large database" approach.
  • For good quality synthesis, this database technique relies on being able to select the "best" units from the database - that is, the units that are closest in character to the prosodic specification provided by the speech synthesis system, and that have a low spectral mismatch at the concatenation points between phonemes. The "best" sequence of units may be determined by associating a numerical cost in two different ways. First, a "target cost" is associated with the individual units in isolation, where a lower cost is associated with a unit that has characteristics (e.g., F0, gain, spectral distribution) relatively close to the unit being synthesized, and a higher cost is associated with units having a higher discrepancy with the unit being synthesized. A second cost, referred to as the "concatenation cost", is associated with how smoothly two contiguous units are joined together. For example, if the spectral mismatch between units is poor, perhaps even corresponding to an audible "click", there will be a higher concatenation cost.
  • Thus, a set of candidate units for each position in the desired sequence can be formulated, with associated target costs and concatenative costs. Estimating the best (lowest-cost) path through the network is then performed using a Viterbi search. The chosen units may then be concatenated to form one continuous signal, using a variety of different techniques.
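To make the cost framework concrete, the sketch below shows one minimal way such a search could be organized: each position in the target phoneme sequence has a list of candidate units, a target cost scores each candidate in isolation, a concatenation cost scores each join, and a Viterbi pass recovers the cheapest path. The function names and signatures (target_cost, concat_cost, the lattice layout) are illustrative assumptions, not the patent's implementation.

```python
# Minimal illustrative sketch (not the patent's code): Viterbi search over a
# lattice of candidate units, combining per-unit target costs with pairwise
# concatenation costs. target_cost(i, u) and concat_cost(prev, u) are
# hypothetical callbacks supplied by the caller.

def viterbi_select(lattice, target_cost, concat_cost):
    """lattice[i] is the list of candidate units for position i."""
    # best[i][u] = (lowest accumulated cost of any path ending in unit u, backpointer)
    best = [{u: (target_cost(0, u), None) for u in lattice[0]}]
    for i in range(1, len(lattice)):
        column = {}
        for u in lattice[i]:
            # pick the predecessor that minimizes accumulated cost plus join cost
            prev, cost = min(
                ((p, best[i - 1][p][0] + concat_cost(p, u)) for p in best[i - 1]),
                key=lambda pair: pair[1],
            )
            column[u] = (cost + target_cost(i, u), prev)
        best.append(column)
    # backtrack from the cheapest unit in the final column
    unit = min(best[-1], key=lambda u: best[-1][u][0])
    path = [unit]
    for i in range(len(best) - 1, 0, -1):
        path.append(best[i][path[-1]][1])
    return list(reversed(path))
```

A production system would also need pruning and handling of empty candidate lists, but the structure (local target costs, pairwise join costs, dynamic programming over the lattice) is the one described above.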
  • While such database-driven systems may produce a more natural sounding voice quality, to do so they require a great deal of computational resources during the synthesis process. Accordingly, there remains a need for new methods and systems that provide natural voice quality in speech synthesis while reducing the computational requirements.
  • "Rapid Unit selection from a Large speech Corpus for Concatenative Speech Synthesis" by M. Beutnagel et al., Proceedings of Eurospeech (1999), addresses the problem that precomputing and caching the concatenation costs of joining units is not practical owing to the large number of units. It proposes synthesising a large quantity of text, and logging the units actually selected in order to obtain usage statistics and construct a practical and efficient cache of concatenation costs.
  • Yet, "Automatic generation of synthesis units for trainable text-to-speech systems", by H. Hon et al., Proceedings of the IEEE Conference in Acoustics, Speech and Signal Processing (ICASSP), 12 May 1998, addresses the problem of synthesising speech using a decision tree, that is, a database, of context dependent phones that could be triphones or quinphones (a phone with two immediate left and right contexts).
  • Summary of the Invention
  • A first aspect of the invention provides a method of creating a preselection cost database of triphones as defined in the appended claim 1.
  • A second aspect of the invention provides a method of synthesizing speech from an input text as defined in the appended claim 4, which makes use of a database generated according to the first aspect of the invention.
  • A third aspect of the invention provides a system for synthesizing speech using phonemes as defined in the appended claim 7.
  • The need remaining in the prior art is addressed by the present invention, which relates to a system and method for increasing the speed of a unit selection synthesis system for concatenative speech and, more particularly, to predetermining a universe of phonemes in the speech database, selected on the basis of their triphone context, that are potentially used in speech, and performing real-time selection from this precalculated phoneme universe.
  • In accordance with the present invention, a triphone database is created where for any given triphone context required for synthesis, there is a complete list, precalculated, of all the units (phonemes) in the database that can possibly be used in that triphone context. Advantageously, this list is (in most cases) a significantly smaller set of candidate units than the complete set of units of that phoneme type. By ignoring units that are guaranteed not to be used in the given triphone context, the selection process speed is significantly increased. It has also been found that speech quality is not compromised with the unit selection process of the present invention.
  • Depending upon the unit required for synthesis, as well as the surrounding phoneme context, the number of phonemes in the preselection list will vary and may, at one extreme, include all possible phonemes of a particular type. There may also arise a situation where the unit to be synthesized (plus context) does not match any of the precalculated triphones. In this case, the conventional single phoneme approach of the prior art may be employed, using the complete set of phonemes of a given type. It is presumed that these instances will be relatively infrequent.
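As a rough illustration of the lookup just described, the sketch below returns the precalculated candidate list when the triphone key exists and falls back to every unit of the requested phoneme type when it does not. The names preselection_db and units_by_phoneme are hypothetical placeholders for the stored structures.

```python
# Illustrative only: candidate lookup against a precomputed triphone
# preselection database, falling back to the full inventory of units of the
# requested phoneme type when the triphone key has no precalculated entry.

def candidate_units(prev_ph, ph, next_ph, preselection_db, units_by_phoneme):
    key = (prev_ph, ph, next_ph)        # triphone-based key
    candidates = preselection_db.get(key)
    if candidates:                      # precalculated, usually much smaller, list
        return candidates
    return units_by_phoneme[ph]         # conventional fallback: all units of this type
```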
  • Other and further aspects of the present invention will become apparent during the course of the following discussion and by reference to the accompanying drawings.
  • Brief Description of the Drawings
  • Referring now to the drawings,
    • FIG. 1 illustrates an exemplary speech synthesis system for utilizing the unit (e.g., phoneme) selection arrangement of the present invention;
    • FIG. 2 illustrates, in more detail, an exemplary text-to-speech synthesizer that may be used in the system of FIG. 1;
    • FIG. 3 illustrates an exemplary "phoneme" sequence and the various costs associated with this sequence;
    • FIG. 4 contains an illustration of an exemplary unit (phoneme) database useful as the unit selection database in the system of FIG. 1;
    • FIG. 5 is a flowchart illustrating the triphone cost precalculation process of the present invention, where the top N units are selected on the basis of cost (the top 50 units for any 5-phone sequence containing a given triphone being guaranteed to be present); and
    • FIG. 6 is a flowchart illustrating the unit (phoneme) selection process of the present invention, utilizing the precalculated triphone-indexed list of units (phonemes).
    Detailed Description
  • An exemplary speech synthesis system 100 is illustrated in FIG. 1. System 100 includes a text-to-speech synthesizer 104 that is connected to a data source 102 through an input link 108, and is likewise connected to a data sink 106 through an output link 110. Text-to-speech synthesizer 104, as discussed in detail below in association with FIG. 2, functions to convert the text data either to speech data or physical speech. In operation, synthesizer 104 converts the text data by first converting the text into a stream of phonemes representing the speech equivalent of the text, then processes the phoneme stream to produce an acoustic unit stream that provides a clearer and more understandable rendering of the speech. Synthesizer 104 then converts the acoustic unit stream to speech data or physical speech. In accordance with the teachings of the present invention, as discussed in detail below, database units (phonemes), accessed according to their triphone context, are processed to speed up the unit selection process.
  • Data source 102 provides text-to-speech synthesizer 104, via input link 108, the data that represents the text to be synthesized. The data representing the text of the speech can be in any format, such as binary, ASCII, or a word processing file. Data source 102 can be any one of a number of different types of data sources, such as a computer, a storage device, or any combination of software and hardware capable of generating, relaying, or recalling from storage, a textual message or any information capable of being translated into speech. Data sink 106 receives the synthesized speech from text-to-speech synthesizer 104 via output link 110. Data sink 106 can be any device capable of audibly outputting speech, such as a speaker system for transmitting mechanical sound waves, or a digital computer, or any combination of hardware and software capable of receiving, relaying, storing, sensing or perceiving speech sound or information representing speech sounds.
  • Links 108 and 110 can be any suitable device or system for connecting data source 102/data sink 106 to synthesizer 104. Such devices include a direct serial/parallel cable connection, a connection over a wide area network (WAN) or a local area network (LAN), a connection over an intranet, the Internet, or any other distributed processing network or system. Additionally, input link 108 or output link 110 may be software devices linking various software systems.
  • FIG. 2 contains a more detailed block diagram of text-to-speech synthesizer 104 of FIG. 1. Synthesizer 104 comprises, in this exemplary embodiment, a text normalization device 202, syntactic parser device 204, word pronunciation module 206, prosody generation device 208, an acoustic unit selection device 210, and a speech synthesis back-end device 212. In operation, textual data is received on input link 108 and first applied as an input to text normalization device 202. Text normalization device 202 parses the text data into known words and further converts abbreviations and numbers into words to produce a corresponding set of normalized textual data. For example, if "St." is input, text normalization device 202 is used to pronounce the abbreviation as either "saint" or "street", but not the /st/ sound. Once the text has been normalized, it is input to syntactic parser 204. Syntactic parser 204 performs grammatical analysis of a sentence to identify the syntactic structure of each constituent phrase and word. For example, syntactic parser 204 will identify a particular phrase as a "noun phrase" or a "verb phrase" and a word as a noun, verb, adjective, etc. Syntactic parsing is important because whether the word or phrase is being used as a noun or a verb may affect how it is articulated. For example, in the sentence "the cat ran away", if "cat" is identified as a noun and "ran" is identified as a verb, speech synthesizer 104 may assign the word "cat" a different sound duration and intonation pattern than "ran" because of its position and function in the sentence structure.
  • Once the syntactic structure of the text has been determined, the text is input to word pronunciation module 206. In word pronunciation module 206, orthographic characters used in the normal text are mapped into the appropriate strings of phonetic segments representing units of sound and speech. This is important since the same orthographic strings may have different pronunciations depending on the word in which the string is used. For example, the orthographic string "gh" is translated to the phoneme /f/ in "tough", to the phoneme /g/ in "ghost", and is not directly realized as any phoneme in "though". Lexical stress is also marked. For example, "record" has a primary stress on the first syllable if it is a noun, but has the primary stress on the second syllable if it is a verb. The output from word pronunciation module 206, in the form of phonetic segments, is then applied as an input to prosody determination device 208. Prosody determination device 208 assigns patterns of timing and intonation to the phonetic segment strings. The timing pattern includes the duration of sound for each of the phonemes. For example, the "re" in the verb "record" has a longer duration of sound than the "re" in the noun "record". Furthermore, the intonation pattern concerns pitch changes during the course of an utterance. These pitch changes express accentuation of certain words or syllables as they are positioned in a sentence and help convey the meaning of the sentence. Thus, the patterns of timing and intonation are important for the intelligibility and naturalness of synthesized speech. Prosody may be generated in various ways including assigning an artificial accent or providing for sentence context. For example, the phrase "This is a test!" will be spoken differently from "This is a test?". Prosody generating devices are well-known to those of ordinary skill in the art and any combination of hardware, software, firmware, heuristic techniques, databases, or any other apparatus or method that performs prosody generation may be used. In accordance with the present invention, the phonetic output and accompanying prosodic specification from prosody determination device 208 is then converted, using any suitable, well-known technique, into unit (phoneme) specifications.
  • The phoneme data, along with the corresponding characteristic parameters, is then sent to acoustic unit selection device 210 where the phonemes and characteristic parameters are transformed into a stream of acoustic units that represent speech. An "acoustic unit" can be defined as a particular utterance of a given phoneme. Large numbers of acoustic units, as discussed below in association with FIG. 3, may all correspond to a single phoneme, each acoustic unit differing from one another in terms of pitch, duration, and stress (as well as other phonetic or prosodic qualities). In accordance with the present invention, a triphone preselection cost database 214 is accessed by unit selection device 210 to provide a candidate list of units, based on a triphone context, that are most likely to be used in the synthesis process. Unit selection device 210 then performs a search on this candidate list (using a Viterbi search, for example), to find the "least cost" unit that best matches the phoneme to be synthesized. The acoustic unit stream output from unit selection device 210 is then sent to speech synthesis back-end device 212 which converts the acoustic unit stream into speech data and transmits (referring to FIG. 1) the speech data to data sink 106 over output link 110.
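Tying the pieces together, a unit selection step along the lines of device 210 might build the per-position candidate lists from the triphone preselection database and then run the dynamic-programming search, reusing the candidate_units and viterbi_select sketches given earlier. The "#" boundary symbol and the cost callbacks are assumptions made for illustration, not details from the patent.

```python
# Illustrative composition of the earlier sketches: build a candidate lattice
# from the triphone preselection database, then find the least-cost unit
# sequence with the Viterbi search.

def select_units(phonemes, preselection_db, units_by_phoneme,
                 target_cost, concat_cost):
    padded = ["#"] + list(phonemes) + ["#"]     # assumed sentence-boundary symbol
    lattice = [
        candidate_units(padded[i - 1], padded[i], padded[i + 1],
                        preselection_db, units_by_phoneme)
        for i in range(1, len(padded) - 1)
    ]
    return viterbi_select(lattice, target_cost, concat_cost)
```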
  • FIG. 3 contains an example of a phoneme string 302-310 for the word "cat" with an associated set of characteristic parameters 312 - 320 (for example, F0, duration, etc.) assigned, respectively, to each phoneme and a separate list of acoustic unit groups 322, 324 and 326 for each utterance. Each acoustic unit group includes at least one acoustic unit 328 and each acoustic unit 328 includes an associated target cost 330, as defined above. A concatenation cost 332, as represented by the arrow in FIG. 3, is assigned between each acoustic unit 328 in a given group and each acoustic unit 328 of the immediately subsequent group.
  • In the prior art, the unit selection process was performed on a phoneme-by-phoneme basis (or, in more robust systems, on a half-phoneme-by-half-phoneme basis) for every instance of each unit contained in the speech database. Thus, when considering the /æ/ phoneme 306, each of its acoustic unit realizations 328 in speech database 324 would be processed to determine the individual target costs 330, compared to the text to be synthesized. Similarly, phoneme-by-phoneme processing (during run time) would also be required for /k/ phoneme 304 and /t/ phoneme 308. Since there are many occurrences of the phoneme /æ/ that would not be preceded by /k/ and/or followed by /t/, there were many target costs in the prior art systems that were likely to be unnecessarily calculated.
  • In accordance with the present invention, it has been recognized that run-time calculation time can be significantly reduced by pre-computing the list of phoneme candidates from the speech database that can possibly be used in the final synthesis before beginning to work out target costs. To this end, a "triphone" database (illustrated as database 214 in FIG. 2) is created where lists of units (phonemes) that might be used in any given triphone context are stored (and indexed using a triphone-based key) and can be accessed during the process of unit selection. For the English language, there are approximately 10,000 common triphones, so the creation of such a database is not an insurmountable task. In particular, for the triphone /k/ - /æ/ - /t/, each possible /æ/ in the database is examined to determine how well it (and the surrounding phonemes that occur in the speech from which it was extracted) matches the synthesis specifications, as shown in FIG. 4. By then allowing the phonemes on either side of /k/ and /t/ to vary over the complete universe of phonemes, all possible costs can be examined that may be calculated at run-time for a particular phoneme in a triphone context. In particular, when this analysis is complete, only the N "best" units are retained for any 5-phoneme context (in terms of lowest concatenation cost; in one example N may be equal to 50). It is possible to "combine" (i.e., take the union of) the relevant units that have a particular triphone in common. Because of the way this calculation is arranged, the combination is guaranteed to be the list of all units that are relevant for this specific part of the synthesis.
  • In most cases, there will be a number of units (i.e., specific instances of the phonemes) that will not occur in the union of all possible units, and therefore need never be considered in calculating the costs at run time. The preselection process of the present invention, therefore, results in increasing the speed of the selection process. In one instance, an increase of 100% has been achieved. It is to be presumed that if a particular triphone does not appear to have an associated list of units, the conventional unit cost selection process will be used.
  • In general, therefore, for any unit u2 that is to be synthesized as part of the triphone sequence u1 - u2 - u3, the preselection cost for every possible 5-phone combination ua - u1 - u2 - u3 - ub that contains this triphone is calculated. It is to be noted that this process is also useful in systems that utilize half-phonemes, as long as "phoneme" spacing is maintained in creating each triphone cost that is calculated. Using the above example, one sequence would be k1 - æ1 - t1 and another would be k2 - æ2 - t2. This unit spacing is used to avoid including redundant information in the cost functions (since the identity of one of the adjacent half-phones is already a known quantity). In accordance with the present invention, the costs for all sequences ua - k1 - æ1 - t1 - ub are calculated, where ua and ub are allowed to vary over the entire phoneme set. Similarly, the costs for all sequences ua - k2 - æ2 - t2 - ub are calculated, and so on for each possible triphone sequence. The purpose of calculating the costs offline is solely to determine which units can potentially play a role in the subsequent synthesis, and which can be safely ignored. It is to be noted that the specific relevant costs are re-calculated at synthesis time. This re-calculation is necessary, since a component of the cost is dependent on knowledge of the particular synthesis specification, available only at run time.
  • Formally, for each individual phoneme to be synthesized, a determination is first made to find a particular triphone context that is of interest. Following that, a determination is made with respect to which acoustic units are either within or outside of the acceptable cost limit for that triphone context. The union of all chosen 5-phone sequences is then performed and associated with the triphone to be synthesized. That is:
    PreselectSet(u1, u2, u3) = ∪_{a∈PH} ∪_{b∈PH} CCn(ua, u1, u2, u3, ub)
    where CCn is a function that calculates the n-best matching units in the database for the given context, i.e., the set of units with the lowest n context costs. PH is defined as the set of unit types. The value of "n" refers to the minimum number of candidates that are needed for any given sequence of the form ua - u1 - u2 - u3 - ub.
  • FIG. 5 shows, in simplified form, a flowchart illustrating the process used to populate the triphone cost database used in the system of the present invention. The process is initiated at block 500 and selects a first triphone u1 - u2 - u3 (block 502) for which preselection costs will be calculated. The process then proceeds to block 504, which selects a first pair of phonemes to serve as the "left" (ua) and "right" (ub) neighbors of the previously selected triphone. The concatenation costs associated with this 5-phone grouping are calculated (block 506) and stored in a database with this particular triphone identity (block 508). The preselection costs for this particular triphone are calculated by varying phonemes ua and ub over the complete set of phonemes (block 510). Thus, a preselection cost will be calculated for the selected triphone in a 5-phoneme context. Once all possible 5-phoneme combinations of a selected triphone have been evaluated and a cost determined, the "best" are retained, with the proviso that for any arbitrary 5-phoneme context, the set is guaranteed to contain the top N units. The "best" units are defined as exhibiting the lowest target cost (block 512). In an exemplary embodiment, N=50. Once the "top 50" choices for a selected triphone have been stored in the triphone database, a check is made (block 514) to see if all possible triphone combinations have been evaluated. If so, the process stops and the triphone database is defined as completed. Otherwise, the process returns to block 502 and selects another triphone for evaluation, using the same method. The process will continue until all possible triphone combinations have been reviewed and the costs calculated. It is an advantage of the present invention that this process is performed only once, prior to "run time", so that during the actual synthesis process (as illustrated in FIG. 6), the unit selection process uses this created triphone database.
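Read together, the PreselectSet formula and the FIG. 5 loop amount to the following off-line procedure, sketched here under assumed names (context_cost stands in for the cost components the patent computes; N=50 matches the exemplary embodiment): for every triphone, vary ua and ub over the full phoneme set PH, keep the N cheapest units of type u2 for each resulting 5-phone context, and store the union of those sets under the triphone key.

```python
# Illustrative off-line build of the triphone preselection database (FIG. 5 /
# the PreselectSet formula). context_cost(unit, context) is a hypothetical
# stand-in for the preselection cost computed for one unit in one 5-phone
# context; units_by_phoneme maps a phoneme type to all its recorded units.

from itertools import product

def build_preselection_db(triphones, PH, units_by_phoneme, context_cost, N=50):
    db = {}
    for (u1, u2, u3) in triphones:                   # select a triphone (block 502)
        keep = set()
        for ua, ub in product(PH, repeat=2):         # vary the left/right context (blocks 504, 510)
            context = (ua, u1, u2, u3, ub)
            ranked = sorted(units_by_phoneme[u2],    # every unit of type u2
                            key=lambda unit: context_cost(unit, context))
            keep.update(ranked[:N])                  # retain the N cheapest for this context (block 512)
        db[(u1, u2, u3)] = keep                      # union stored under the triphone key (block 508)
    return db
```

At synthesis time only these much smaller stored sets need to be re-scored against the actual synthesis specification, which is where the speed gain described above comes from.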
  • FIG. 6 is a flowchart of an exemplary speech synthesis system. At its initiation (block 600), a first step is to receive the input text (block 610) and apply it (block 620) as an input to text normalization device 202 (as shown in FIG. 2). The normalized text is then syntactically parsed (block 630) so that the syntactic structure of each constituent phrase or word is identified as, for example, a noun, verb, adjective, etc. The syntactically parsed text is then converted to a phoneme-based representation (block 640), where these phonemes are then applied as inputs to a unit (phoneme) selection module, such as unit selection device 210 discussed in detail above in association with FIG. 2. A preselection triphone database 214, such as that generated by following the steps as outlined in FIG. 5, is added to the configuration. Where a match is found with a triphone key in the database, the prior art process of assessing every possible candidate of a particular unit (phoneme) type is replaced by the inventive process of assessing the shorter, precalculated list related to the triphone key. A candidate list of each requested unit is generated and a Viterbi search is performed (block 650) to find the lowest cost path through the selected phonemes. The selected phonemes may then be further processed (block 660) to form the actual speech output.

Claims (8)

  1. A method of creating a preselection cost database of triphones to be used in speech synthesis, the method comprising the steps of:
    a) selecting (504) a predetermined triphone sequence u1 - u2 - u3;
    b) calculating (506) a preselection cost for each 5-phoneme sequence ua - u1 - u2 - u3 - ub , where u2 is allowed to match any identically labeled phoneme in the database and the units ua and ub vary over the entire phoneme universe;
    c) determining a plurality of N least cost database units for the particular 5-phoneme context;
    d) performing the union of the plurality of N least cost database units determined in step c);
    e) storing (508) the union created in step d) in a triphone preselection cost database; and
    f) repeating steps a) - e) for each possible triphone sequence.
  2. The method as defined in claim 1 wherein in performing step d), a plurality of fifty least cost sequences and associated costs are stored.
  3. The method as defined in claim 1 wherein in performing step b), the preselection cost is the target cost or an element of the target cost.
  4. A method of synthesizing speech from an input text using phonemes, the method comprising the steps of:
    a) creating a triphone preselection cost database according to a method as defined in any one of claims 1 to 3, and generating a key to index each triphone in the database;
    b) retrieving a portion of the input text for synthesis as a phoneme sequence;
    c) comparing a retrieved phoneme, in context with its neighboring phonemes, with a plurality of N least cost triphone keys stored in the triphone preselection cost database;
    d) choosing (640), as candidates for synthesis, a list of units from the triphone preselection cost database that comprise a matching triphone key;
    e) repeating steps b) through d) for each phoneme in the input text;
    f) selecting the least cost path through the network of candidates;
    g) processing the phonemes selected in step f) into synthesized speech; and
    h) outputting (660) the synthesized speech to an output device.
  5. The method as defined in claim 4 wherein in performing step c), the following steps are performed:
    1) comparing the retrieved phoneme and its neighboring phonemes to a selected triphone preselection database key;
    2) if a match is found, retaining the unit associated with the triphone preselection database key as a candidate for synthesis, otherwise
    3) using the full list of phonemes of the same type as the retrieved phoneme as the candidate list; and
    4) repeating steps 1) - 3) for each appropriate triphone preselection database key.
  6. The method as defined in claim 4 wherein in performing step f), a Viterbi search mechanism is used.
  7. A system (104) comprising means for performing a method as defined in claim 1, 2 or 3.
  8. A system (104) comprising means for performing a method as defined in claim 4, 5 or 6.
EP01305403A 2000-06-30 2001-06-21 Method and system for preselection of suitable units for concatenative speech Expired - Lifetime EP1168299B8 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/607,615 US6684187B1 (en) 2000-06-30 2000-06-30 Method and system for preselection of suitable units for concatenative speech
US607615 2009-10-28

Publications (4)

Publication Number Publication Date
EP1168299A2 EP1168299A2 (en) 2002-01-02
EP1168299A3 EP1168299A3 (en) 2002-10-23
EP1168299B1 true EP1168299B1 (en) 2012-11-21
EP1168299B8 EP1168299B8 (en) 2013-03-13

Family

ID=24433014

Family Applications (1)

Application Number Title Priority Date Filing Date
EP01305403A Expired - Lifetime EP1168299B8 (en) 2000-06-30 2001-06-21 Method and system for preselection of suitable units for concatenative speech

Country Status (4)

Country Link
US (5) US6684187B1 (en)
EP (1) EP1168299B8 (en)
CA (1) CA2351988C (en)
MX (1) MXPA01006594A (en)

Families Citing this family (188)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7082396B1 (en) * 1999-04-30 2006-07-25 At&T Corp Methods and apparatus for rapid acoustic unit selection from a large speech corpus
US7369994B1 (en) 1999-04-30 2008-05-06 At&T Corp. Methods and apparatus for rapid acoustic unit selection from a large speech corpus
US8645137B2 (en) 2000-03-16 2014-02-04 Apple Inc. Fast, language-independent method for user authentication by voice
US6684187B1 (en) 2000-06-30 2004-01-27 At&T Corp. Method and system for preselection of suitable units for concatenative speech
US6505158B1 (en) * 2000-07-05 2003-01-07 At&T Corp. Synthesis-based pre-selection of suitable units for concatenative speech
KR20030005222A (en) * 2001-01-10 2003-01-17 코닌클리케 필립스 일렉트로닉스 엔.브이. Coding
US6829581B2 (en) * 2001-07-31 2004-12-07 Matsushita Electric Industrial Co., Ltd. Method for prosody generation by unit selection from an imitation speech database
ITFI20010199A1 (en) 2001-10-22 2003-04-22 Riccardo Vieri SYSTEM AND METHOD TO TRANSFORM TEXTUAL COMMUNICATIONS INTO VOICE AND SEND THEM WITH AN INTERNET CONNECTION TO ANY TELEPHONE SYSTEM
US7353164B1 (en) 2002-09-13 2008-04-01 Apple Inc. Representation of orthography in a continuous vector space
US7047193B1 (en) * 2002-09-13 2006-05-16 Apple Computer, Inc. Unsupervised data-driven pronunciation modeling
TWI220511B (en) * 2003-09-12 2004-08-21 Ind Tech Res Inst An automatic speech segmentation and verification system and its method
US20050096909A1 (en) * 2003-10-29 2005-05-05 Raimo Bakis Systems and methods for expressive text-to-speech
CN100524457C (en) * 2004-05-31 2009-08-05 国际商业机器公司 Device and method for text-to-speech conversion and corpus adjustment
US7869999B2 (en) * 2004-08-11 2011-01-11 Nuance Communications, Inc. Systems and methods for selecting from multiple phonectic transcriptions for text-to-speech synthesis
WO2006050238A1 (en) * 2004-10-28 2006-05-11 Voice Signal Technologies, Inc. Codec-dependent unit selection for mobile devices
US7418389B2 (en) * 2005-01-11 2008-08-26 Microsoft Corporation Defining atom units between phone and syllable for TTS systems
US8677377B2 (en) 2005-09-08 2014-03-18 Apple Inc. Method and apparatus for building an intelligent automated assistant
US7633076B2 (en) 2005-09-30 2009-12-15 Apple Inc. Automated response to and sensing of user activity in portable devices
US20070106513A1 (en) * 2005-11-10 2007-05-10 Boillot Marc A Method for facilitating text to speech synthesis using a differential vocoder
US9318108B2 (en) 2010-01-18 2016-04-19 Apple Inc. Intelligent automated assistant
US20080129520A1 (en) * 2006-12-01 2008-06-05 Apple Computer, Inc. Electronic device with enhanced audio feedback
JP4406440B2 (en) * 2007-03-29 2010-01-27 株式会社東芝 Speech synthesis apparatus, speech synthesis method and program
US8977255B2 (en) 2007-04-03 2015-03-10 Apple Inc. Method and system for operating a multi-function portable electronic device using voice-activation
US20090043583A1 (en) * 2007-08-08 2009-02-12 International Business Machines Corporation Dynamic modification of voice selection based on user specific factors
JP5238205B2 (en) * 2007-09-07 2013-07-17 ニュアンス コミュニケーションズ,インコーポレイテッド Speech synthesis system, program and method
US9053089B2 (en) * 2007-10-02 2015-06-09 Apple Inc. Part-of-speech tagging using latent analogy
US8620662B2 (en) 2007-11-20 2013-12-31 Apple Inc. Context-aware unit selection
US10002189B2 (en) 2007-12-20 2018-06-19 Apple Inc. Method and apparatus for searching using an active ontology
US9330720B2 (en) * 2008-01-03 2016-05-03 Apple Inc. Methods and apparatus for altering audio output signals
US8065143B2 (en) 2008-02-22 2011-11-22 Apple Inc. Providing text input using speech data and non-speech data
US8996376B2 (en) 2008-04-05 2015-03-31 Apple Inc. Intelligent text-to-speech conversion
US10496753B2 (en) 2010-01-18 2019-12-03 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US8464150B2 (en) 2008-06-07 2013-06-11 Apple Inc. Automatic language identification for dynamic text processing
CN101605307A (en) * 2008-06-12 2009-12-16 深圳富泰宏精密工业有限公司 Test short message service (SMS) voice play system and method
US20100030549A1 (en) 2008-07-31 2010-02-04 Lee Michael M Mobile device having human language translation capability with positional feedback
US8768702B2 (en) 2008-09-05 2014-07-01 Apple Inc. Multi-tiered voice feedback in an electronic device
US8898568B2 (en) * 2008-09-09 2014-11-25 Apple Inc. Audio user interface
US8712776B2 (en) * 2008-09-29 2014-04-29 Apple Inc. Systems and methods for selective text to speech synthesis
US8583418B2 (en) 2008-09-29 2013-11-12 Apple Inc. Systems and methods of detecting language and natural language strings for text to speech synthesis
US8676904B2 (en) 2008-10-02 2014-03-18 Apple Inc. Electronic devices with voice command and contextual data processing capabilities
US9959870B2 (en) 2008-12-11 2018-05-01 Apple Inc. Speech recognition involving a mobile device
US8862252B2 (en) 2009-01-30 2014-10-14 Apple Inc. Audio user interface for displayless electronic device
US8380507B2 (en) 2009-03-09 2013-02-19 Apple Inc. Systems and methods for determining the language to use for speech generated by a text to speech engine
US10241752B2 (en) 2011-09-30 2019-03-26 Apple Inc. Interface for a virtual digital assistant
US10540976B2 (en) * 2009-06-05 2020-01-21 Apple Inc. Contextual voice commands
US10255566B2 (en) 2011-06-03 2019-04-09 Apple Inc. Generating and processing task items that represent tasks to perform
US10241644B2 (en) 2011-06-03 2019-03-26 Apple Inc. Actionable reminder entries
US9858925B2 (en) 2009-06-05 2018-01-02 Apple Inc. Using context information to facilitate processing of commands in a virtual assistant
US9431006B2 (en) 2009-07-02 2016-08-30 Apple Inc. Methods and apparatuses for automatic speech recognition
JP5471858B2 (en) * 2009-07-02 2014-04-16 ヤマハ株式会社 Database generating apparatus for singing synthesis and pitch curve generating apparatus
US8805687B2 (en) * 2009-09-21 2014-08-12 At&T Intellectual Property I, L.P. System and method for generalized preselection for unit selection synthesis
US8682649B2 (en) * 2009-11-12 2014-03-25 Apple Inc. Sentiment prediction from textual data
US8600743B2 (en) * 2010-01-06 2013-12-03 Apple Inc. Noise profile determination for voice-related feature
US8381107B2 (en) 2010-01-13 2013-02-19 Apple Inc. Adaptive audio feedback system and method
US8311838B2 (en) 2010-01-13 2012-11-13 Apple Inc. Devices and methods for identifying a prompt corresponding to a voice input in a sequence of prompts
US10679605B2 (en) 2010-01-18 2020-06-09 Apple Inc. Hands-free list-reading by intelligent automated assistant
US10705794B2 (en) 2010-01-18 2020-07-07 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US10553209B2 (en) 2010-01-18 2020-02-04 Apple Inc. Systems and methods for hands-free notification summaries
US10276170B2 (en) 2010-01-18 2019-04-30 Apple Inc. Intelligent automated assistant
DE202011111062U1 (en) 2010-01-25 2019-02-19 Newvaluexchange Ltd. Device and system for a digital conversation management platform
US8682667B2 (en) 2010-02-25 2014-03-25 Apple Inc. User profiling for selecting user specific voice input processing information
US8798998B2 (en) 2010-04-05 2014-08-05 Microsoft Corporation Pre-saved data compression for TTS concatenation cost
US8731931B2 (en) 2010-06-18 2014-05-20 At&T Intellectual Property I, L.P. System and method for unit selection text-to-speech using a modified Viterbi approach
US8713021B2 (en) 2010-07-07 2014-04-29 Apple Inc. Unsupervised document clustering using latent semantic density analysis
US8965768B2 (en) * 2010-08-06 2015-02-24 At&T Intellectual Property I, L.P. System and method for automatic detection of abnormal stress patterns in unit selection synthesis
US8719006B2 (en) 2010-08-27 2014-05-06 Apple Inc. Combined statistical and rule-based part-of-speech tagging for text-to-speech synthesis
US8719014B2 (en) 2010-09-27 2014-05-06 Apple Inc. Electronic device with text error correction based on voice recognition data
US10762293B2 (en) 2010-12-22 2020-09-01 Apple Inc. Using parts-of-speech tagging and named entity recognition for spelling correction
US10515147B2 (en) 2010-12-22 2019-12-24 Apple Inc. Using statistical language models for contextual lookup
US8781836B2 (en) 2011-02-22 2014-07-15 Apple Inc. Hearing assistance system for providing consistent human speech
US9262612B2 (en) 2011-03-21 2016-02-16 Apple Inc. Device access using voice authentication
US9164983B2 (en) 2011-05-27 2015-10-20 Robert Bosch Gmbh Broad-coverage normalization system for social media language
US10672399B2 (en) 2011-06-03 2020-06-02 Apple Inc. Switching between text data and audio data based on a mapping
US10057736B2 (en) 2011-06-03 2018-08-21 Apple Inc. Active transport based notifications
US8812294B2 (en) 2011-06-21 2014-08-19 Apple Inc. Translating phrases from one language into another using an order-based set of declarative rules
US8706472B2 (en) 2011-08-11 2014-04-22 Apple Inc. Method for disambiguating multiple readings in language conversion
US8994660B2 (en) 2011-08-29 2015-03-31 Apple Inc. Text correction processing
US8762156B2 (en) 2011-09-28 2014-06-24 Apple Inc. Speech recognition repair using contextual information
US10134385B2 (en) 2012-03-02 2018-11-20 Apple Inc. Systems and methods for name pronunciation
US9483461B2 (en) 2012-03-06 2016-11-01 Apple Inc. Handling speech synthesis of content for multiple languages
US9280610B2 (en) 2012-05-14 2016-03-08 Apple Inc. Crowd sourcing information to fulfill user requests
US8775442B2 (en) 2012-05-15 2014-07-08 Apple Inc. Semantic search using a single-source semantic model
US10417037B2 (en) 2012-05-15 2019-09-17 Apple Inc. Systems and methods for integrating third party services with a digital assistant
WO2013185109A2 (en) 2012-06-08 2013-12-12 Apple Inc. Systems and methods for recognizing textual identifiers within a plurality of words
US9721563B2 (en) 2012-06-08 2017-08-01 Apple Inc. Name recognition system
US9495129B2 (en) 2012-06-29 2016-11-15 Apple Inc. Device, method, and user interface for voice-activated navigation and browsing of a document
FR2993088B1 (en) * 2012-07-06 2014-07-18 Continental Automotive France Method and system for voice synthesis
US10169456B2 (en) * 2012-08-14 2019-01-01 International Business Machines Corporation Automatic determination of question in text and determination of candidate responses using data mining
US9576574B2 (en) 2012-09-10 2017-02-21 Apple Inc. Context-sensitive handling of interruptions by intelligent digital assistant
US9547647B2 (en) 2012-09-19 2017-01-17 Apple Inc. Voice-based media searching
US8935167B2 (en) 2012-09-25 2015-01-13 Apple Inc. Exemplar-based latent perceptual modeling for automatic speech recognition
KR20230137475A (en) 2013-02-07 2023-10-04 Apple Inc. Voice trigger for a digital assistant
US9977779B2 (en) 2013-03-14 2018-05-22 Apple Inc. Automatic supplementation of word correction dictionaries
US10642574B2 (en) 2013-03-14 2020-05-05 Apple Inc. Device, method, and graphical user interface for outputting captions
US10652394B2 (en) 2013-03-14 2020-05-12 Apple Inc. System and method for processing voicemail
US9733821B2 (en) 2013-03-14 2017-08-15 Apple Inc. Voice control to diagnose inadvertent activation of accessibility features
US10572476B2 (en) 2013-03-14 2020-02-25 Apple Inc. Refining a search based on schedule items
US9368114B2 (en) 2013-03-14 2016-06-14 Apple Inc. Context-sensitive handling of interruptions
AU2014251347B2 (en) 2013-03-15 2017-05-18 Apple Inc. Context-sensitive handling of interruptions
AU2014233517B2 (en) 2013-03-15 2017-05-25 Apple Inc. Training an at least partial voice command system
US10748529B1 (en) 2013-03-15 2020-08-18 Apple Inc. Voice activated device for use with a voice-based digital assistant
KR101857648B1 (en) 2013-03-15 2018-05-15 Apple Inc. User training by intelligent digital assistant
WO2014144579A1 (en) 2013-03-15 2014-09-18 Apple Inc. System and method for updating an adaptive speech recognition model
US9928754B2 (en) * 2013-03-18 2018-03-27 Educational Testing Service Systems and methods for generating recitation items
US9582608B2 (en) 2013-06-07 2017-02-28 Apple Inc. Unified ranking with entropy-weighted information for phrase-based semantic auto-completion
WO2014197334A2 (en) 2013-06-07 2014-12-11 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
WO2014197336A1 (en) 2013-06-07 2014-12-11 Apple Inc. System and method for detecting errors in interactions with a voice-based digital assistant
WO2014197335A1 (en) 2013-06-08 2014-12-11 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
EP3937002A1 (en) 2013-06-09 2022-01-12 Apple Inc. Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant
US10176167B2 (en) 2013-06-09 2019-01-08 Apple Inc. System and method for inferring user intent from speech inputs
AU2014278595B2 (en) 2013-06-13 2017-04-06 Apple Inc. System and method for emergency calls initiated by voice command
DE112014003653B4 (en) 2013-08-06 2024-04-18 Apple Inc. Automatically activate intelligent responses based on activities from remote devices
US8751236B1 (en) * 2013-10-23 2014-06-10 Google Inc. Devices and methods for speech unit reduction in text-to-speech synthesis systems
US20150149178A1 (en) * 2013-11-22 2015-05-28 At&T Intellectual Property I, L.P. System and method for data-driven intonation generation
US10296160B2 (en) 2013-12-06 2019-05-21 Apple Inc. Method for extracting salient dialog usage from live data
US9620105B2 (en) 2014-05-15 2017-04-11 Apple Inc. Analyzing audio input for efficient speech and music recognition
US10592095B2 (en) 2014-05-23 2020-03-17 Apple Inc. Instantaneous speaking of content on touch devices
US9502031B2 (en) 2014-05-27 2016-11-22 Apple Inc. Method for supporting dynamic grammars in WFST-based ASR
US10078631B2 (en) 2014-05-30 2018-09-18 Apple Inc. Entropy-guided text prediction using combined word and character n-gram language models
US9842101B2 (en) 2014-05-30 2017-12-12 Apple Inc. Predictive conversion of language input
AU2015266863B2 (en) 2014-05-30 2018-03-15 Apple Inc. Multi-command single utterance input method
US9760559B2 (en) 2014-05-30 2017-09-12 Apple Inc. Predictive text input
US9633004B2 (en) 2014-05-30 2017-04-25 Apple Inc. Better resolution when referencing to concepts
US9715875B2 (en) 2014-05-30 2017-07-25 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US9785630B2 (en) 2014-05-30 2017-10-10 Apple Inc. Text prediction using combined word N-gram and unigram language models
US9430463B2 (en) 2014-05-30 2016-08-30 Apple Inc. Exemplar-based natural language processing
US10289433B2 (en) 2014-05-30 2019-05-14 Apple Inc. Domain specific language for encoding assistant dialog
US9734193B2 (en) 2014-05-30 2017-08-15 Apple Inc. Determining domain salience ranking from ambiguous words in natural speech
US10170123B2 (en) 2014-05-30 2019-01-01 Apple Inc. Intelligent assistant for home automation
US9338493B2 (en) 2014-06-30 2016-05-10 Apple Inc. Intelligent automated assistant for TV user interactions
US10659851B2 (en) 2014-06-30 2020-05-19 Apple Inc. Real-time digital assistant knowledge updates
US10446141B2 (en) 2014-08-28 2019-10-15 Apple Inc. Automatic speech recognition based on user feedback
US9818400B2 (en) 2014-09-11 2017-11-14 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US10789041B2 (en) 2014-09-12 2020-09-29 Apple Inc. Dynamic thresholds for always listening speech trigger
US10074360B2 (en) 2014-09-30 2018-09-11 Apple Inc. Providing an indication of the suitability of speech recognition
US9886432B2 (en) 2014-09-30 2018-02-06 Apple Inc. Parsimonious handling of word inflection via categorical stem + suffix N-gram language models
US9646609B2 (en) 2014-09-30 2017-05-09 Apple Inc. Caching apparatus for serving phonetic pronunciations
US9668121B2 (en) 2014-09-30 2017-05-30 Apple Inc. Social reminders
US10127911B2 (en) 2014-09-30 2018-11-13 Apple Inc. Speaker identification and unsupervised speaker adaptation techniques
US10552013B2 (en) 2014-12-02 2020-02-04 Apple Inc. Data detection
US9711141B2 (en) 2014-12-09 2017-07-18 Apple Inc. Disambiguating heteronyms in speech synthesis
US9865280B2 (en) 2015-03-06 2018-01-09 Apple Inc. Structured dictation using intelligent automated assistants
US10567477B2 (en) 2015-03-08 2020-02-18 Apple Inc. Virtual assistant continuity
US9886953B2 (en) 2015-03-08 2018-02-06 Apple Inc. Virtual assistant activation
US9721566B2 (en) 2015-03-08 2017-08-01 Apple Inc. Competing devices responding to voice triggers
US9899019B2 (en) 2015-03-18 2018-02-20 Apple Inc. Systems and methods for structured stem and suffix language models
US9842105B2 (en) 2015-04-16 2017-12-12 Apple Inc. Parsimonious continuous-space phrase representations for natural language processing
US10083688B2 (en) 2015-05-27 2018-09-25 Apple Inc. Device voice control for selecting a displayed affordance
US10127220B2 (en) 2015-06-04 2018-11-13 Apple Inc. Language identification from short strings
US9578173B2 (en) 2015-06-05 2017-02-21 Apple Inc. Virtual assistant aided communication with 3rd party service in a communication session
US10101822B2 (en) 2015-06-05 2018-10-16 Apple Inc. Language input correction
US11025565B2 (en) 2015-06-07 2021-06-01 Apple Inc. Personalized prediction of responses for instant messaging
US10255907B2 (en) 2015-06-07 2019-04-09 Apple Inc. Automatic accent detection using acoustic models
US10186254B2 (en) 2015-06-07 2019-01-22 Apple Inc. Context-based endpoint detection
US10671428B2 (en) 2015-09-08 2020-06-02 Apple Inc. Distributed personal assistant
US10747498B2 (en) 2015-09-08 2020-08-18 Apple Inc. Zero latency digital assistant
US9697820B2 (en) 2015-09-24 2017-07-04 Apple Inc. Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks
US10366158B2 (en) 2015-09-29 2019-07-30 Apple Inc. Efficient word encoding for recurrent neural network language models
US11010550B2 (en) 2015-09-29 2021-05-18 Apple Inc. Unified language modeling framework for word prediction, auto-completion and auto-correction
CN105336322B (en) * 2015-09-30 2017-05-10 Baidu Online Network Technology (Beijing) Co., Ltd. Polyphone model training method, and speech synthesis method and device
US11587559B2 (en) 2015-09-30 2023-02-21 Apple Inc. Intelligent device identification
US10691473B2 (en) 2015-11-06 2020-06-23 Apple Inc. Intelligent automated assistant in a messaging environment
US10049668B2 (en) 2015-12-02 2018-08-14 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US10223066B2 (en) 2015-12-23 2019-03-05 Apple Inc. Proactive assistance based on dialog communication between devices
US10446143B2 (en) 2016-03-14 2019-10-15 Apple Inc. Identification of voice inputs providing credentials
US9934775B2 (en) 2016-05-26 2018-04-03 Apple Inc. Unit-selection text-to-speech synthesis based on predicted concatenation parameters
US9972304B2 (en) 2016-06-03 2018-05-15 Apple Inc. Privacy preserving distributed evaluation framework for embedded personalized systems
US10249300B2 (en) 2016-06-06 2019-04-02 Apple Inc. Intelligent list reading
US10049663B2 (en) 2016-06-08 2018-08-14 Apple, Inc. Intelligent automated assistant for media exploration
DK179588B1 (en) 2016-06-09 2019-02-22 Apple Inc. Intelligent automated assistant in a home environment
US10192552B2 (en) 2016-06-10 2019-01-29 Apple Inc. Digital assistant providing whispered speech
US10586535B2 (en) 2016-06-10 2020-03-10 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US10490187B2 (en) 2016-06-10 2019-11-26 Apple Inc. Digital assistant providing automated status report
US10509862B2 (en) 2016-06-10 2019-12-17 Apple Inc. Dynamic phrase expansion of language input
US10067938B2 (en) 2016-06-10 2018-09-04 Apple Inc. Multilingual word prediction
DK179343B1 (en) 2016-06-11 2018-05-14 Apple Inc Intelligent task discovery
DK201670540A1 (en) 2016-06-11 2018-01-08 Apple Inc Application integration with a digital assistant
DK179415B1 (en) 2016-06-11 2018-06-14 Apple Inc Intelligent device arbitration and control
DK179049B1 (en) 2016-06-11 2017-09-18 Apple Inc Data driven natural language event detection and classification
US10043516B2 (en) 2016-09-23 2018-08-07 Apple Inc. Intelligent automated assistant
US10593346B2 (en) 2016-12-22 2020-03-17 Apple Inc. Rank-reduced token representation for automatic speech recognition
DK201770439A1 (en) 2017-05-11 2018-12-13 Apple Inc. Offline personal assistant
DK179496B1 (en) 2017-05-12 2019-01-15 Apple Inc. USER-SPECIFIC Acoustic Models
DK179745B1 (en) 2017-05-12 2019-05-01 Apple Inc. SYNCHRONIZATION AND TASK DELEGATION OF A DIGITAL ASSISTANT
DK201770431A1 (en) 2017-05-15 2018-12-20 Apple Inc. Optimizing dialogue policy decisions for digital assistants using implicit feedback
DK201770432A1 (en) 2017-05-15 2018-12-21 Apple Inc. Hierarchical belief states for digital assistants
DK179560B1 (en) 2017-05-16 2019-02-18 Apple Inc. Far-field extension for digital assistant services
US11699430B2 (en) * 2021-04-30 2023-07-11 International Business Machines Corporation Using speech to text data in training text to speech models

Family Cites Families (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS55147697A (en) * 1979-05-07 1980-11-17 Sharp Kk Sound synthesizer
SE469576B (en) 1992-03-17 1993-07-26 Televerket PROCEDURE AND DEVICE FOR SYNTHESIS
JPH0695696A (en) * 1992-09-14 1994-04-08 Nippon Telegraph & Telephone Corp. (NTT) Speech synthesis system
US5384893A (en) 1992-09-23 1995-01-24 Emerson & Stern Associates, Inc. Method and apparatus for speech synthesis based on prosodic analysis
EP0590173A1 (en) 1992-09-28 1994-04-06 International Business Machines Corporation Computer system for speech recognition
US6502074B1 (en) * 1993-08-04 2002-12-31 British Telecommunications Public Limited Company Synthesising speech by converting phonemes to digital waveforms
US5987412A (en) * 1993-08-04 1999-11-16 British Telecommunications Public Limited Company Synthesising speech by converting phonemes to digital waveforms
KR950704772A (en) * 1993-10-15 1995-11-20 David M. Rosenblatt A method for training a system, the resulting apparatus, and method of use
US5970454A (en) * 1993-12-16 1999-10-19 British Telecommunications Public Limited Company Synthesizing speech by converting phonemes to digital waveforms
US5794197A (en) * 1994-01-21 1998-08-11 Microsoft Corporation Senone tree representation and evaluation
KR19980702608A (en) 1995-03-07 1998-08-05 Evershed Michael Speech synthesizer
CA2221762C (en) * 1995-06-13 2002-08-20 British Telecommunications Public Limited Company Ideal phonetic unit duration adjustment for text-to-speech system
US5949961A (en) * 1995-07-19 1999-09-07 International Business Machines Corporation Word syllabification in speech synthesis system
US5913193A (en) 1996-04-30 1999-06-15 Microsoft Corporation Method and system of runtime acoustic unit selection for speech synthesis
US5937384A (en) 1996-05-01 1999-08-10 Microsoft Corporation Method and system for speech recognition using continuous density hidden Markov models
GB2313530B (en) 1996-05-15 1998-03-25 Atr Interpreting Telecommunications Speech synthesizer apparatus
US6366883B1 (en) 1996-05-15 2002-04-02 Atr Interpreting Telecommunications Concatenation of speech segments by use of a speech synthesizer
US5850629A (en) * 1996-09-09 1998-12-15 Matsushita Electric Industrial Co., Ltd. User interface controller for text-to-speech synthesizer
US5905972A (en) 1996-09-30 1999-05-18 Microsoft Corporation Prosodic databases holding fundamental frequency templates for use in speech synthesis
US6041300A (en) 1997-03-21 2000-03-21 International Business Machines Corporation System and method of using pre-enrolled speech sub-units for efficient speech synthesis
US5913194A (en) 1997-07-14 1999-06-15 Motorola, Inc. Method, device and system for using statistical information to reduce computation and memory requirements of a neural network based speech synthesis system
US6163769A (en) * 1997-10-02 2000-12-19 Microsoft Corporation Text-to-speech using clustered context-dependent phoneme-based units
US6304846B1 (en) 1997-10-22 2001-10-16 Texas Instruments Incorporated Singing voice synthesis
US6317712B1 (en) * 1998-02-03 2001-11-13 Texas Instruments Incorporated Method of phonetic modeling using acoustic decision tree
JP3884856B2 (en) * 1998-03-09 2007-02-21 キヤノン株式会社 Data generation apparatus for speech synthesis, speech synthesis apparatus and method thereof, and computer-readable memory
JP3481497B2 (en) 1998-04-29 2003-12-22 松下電器産業株式会社 Method and apparatus using a decision tree to generate and evaluate multiple pronunciations for spelled words
US6490563B2 (en) * 1998-08-17 2002-12-03 Microsoft Corporation Proofreading with text to speech feedback
US6173263B1 (en) * 1998-08-31 2001-01-09 At&T Corp. Method and system for performing concatenative speech synthesis using half-phonemes
JP2000075878A (en) * 1998-08-31 2000-03-14 Canon Inc Device and method for voice synthesis and storage medium
JP2002530703A (en) 1998-11-13 2002-09-17 Lernout & Hauspie Speech Products N.V. Speech synthesis using concatenation of speech waveforms
US6253182B1 (en) 1998-11-24 2001-06-26 Microsoft Corporation Method and apparatus for speech synthesis with efficient spectral smoothing
US6684187B1 (en) * 2000-06-30 2004-01-27 At&T Corp. Method and system for preselection of suitable units for concatenative speech
US6505158B1 (en) * 2000-07-05 2003-01-07 At&T Corp. Synthesis-based pre-selection of suitable units for concatenative speech
US7266497B2 (en) * 2002-03-29 2007-09-04 At&T Corp. Automatic segmentation in speech synthesis
US7209882B1 (en) 2002-05-10 2007-04-24 At&T Corp. System and method for triphone-based unit selection for visual speech synthesis
US7289958B2 (en) 2003-10-07 2007-10-30 Texas Instruments Incorporated Automatic language independent triphone training using a phonetic table
US7223901B2 (en) * 2004-03-26 2007-05-29 The Board Of Regents Of The University Of Nebraska Soybean FGAM synthase promoters useful in nematode control
US7226497B2 (en) * 2004-11-30 2007-06-05 Ranco Incorporated Of Delaware Fanless building ventilator
US7912718B1 (en) * 2006-08-31 2011-03-22 At&T Intellectual Property Ii, L.P. Method and system for enhancing a speech database
US7983919B2 (en) * 2007-08-09 2011-07-19 At&T Intellectual Property Ii, L.P. System and method for performing speech synthesis with a cache of phoneme sequences

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Beutnagel et al.: "Rapid unit selection from a large speech corpus for concatenative speech synthesis", Proceedings Eurospeech, Budapest, 5 September 1999 (1999-09-05), pages 1-4, XP007001051 *
Bhaskararao, P. et al.: "Use of triphones for demisyllable-based speech synthesis", International Conference on Acoustics, Speech & Signal Processing (ICASSP), 14 April 1991 (1991-04-14), pages 517-520, XP010043935 *
Holzapfel et al.: "A nonlinear unit selection strategy for concatenative speech synthesis based on syllable", Proceedings ICSLP, 1 October 1998 (1998-10-01), pages 1-4, XP007000370 *
Hon, H. et al.: "Automatic generation of synthesis units for trainable text-to-speech systems", Proceedings of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Seattle, WA, USA, 12-15 May 1998, New York, NY, USA, IEEE, vol. 1, 12 May 1998 (1998-05-12), pages 293-296, XP010279159, ISBN: 978-0-7803-4428-0, DOI: 10.1109/ICASSP.1998.674425 *

Also Published As

Publication number Publication date
CA2351988C (en) 2007-07-24
EP1168299A3 (en) 2002-10-23
US7460997B1 (en) 2008-12-02
US8566099B2 (en) 2013-10-22
US6684187B1 (en) 2004-01-27
US20090094035A1 (en) 2009-04-09
EP1168299B8 (en) 2013-03-13
EP1168299A2 (en) 2002-01-02
MXPA01006594A (en) 2004-07-30
US8224645B2 (en) 2012-07-17
US20040093213A1 (en) 2004-05-13
US20130013312A1 (en) 2013-01-10
US7124083B2 (en) 2006-10-17
CA2351988A1 (en) 2001-12-30

Similar Documents

Publication Publication Date Title
EP1168299B1 (en) Method and system for preselection of suitable units for concatenative speech
US6505158B1 (en) Synthesis-based pre-selection of suitable units for concatenative speech
US6697780B1 (en) Method and apparatus for rapid acoustic unit selection from a large speech corpus
US6173263B1 (en) Method and system for performing concatenative speech synthesis using half-phonemes
Taylor Concept-to-speech synthesis by phonological structure matching
US9691376B2 (en) Concatenation cost in speech synthesis for acoustic unit sequential pair using hash table and default concatenation cost
US7869999B2 (en) Systems and methods for selecting from multiple phonectic transcriptions for text-to-speech synthesis
US10699695B1 (en) Text-to-speech (TTS) processing
US7082396B1 (en) Methods and apparatus for rapid acoustic unit selection from a large speech corpus
KR20010018064A (en) Apparatus and method for text-to-speech conversion using phonetic environment and intervening pause duration
EP1589524B1 (en) Method and device for speech synthesis
EP1640968A1 (en) Method and device for speech synthesis
JP2001331191A (en) Device and method for voice synthesis, portable terminal and program recording medium
GB2292235A (en) Word syllabification.
EP1638080A2 (en) A text-to-speech system and method

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

AX Request for extension of the european patent

Free format text: AL;LT;LV;MK;RO;SI

PUAL Search report despatched

Free format text: ORIGINAL CODE: 0009013

AK Designated contracting states

Kind code of ref document: A3

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

AX Request for extension of the european patent

Free format text: AL;LT;LV;MK;RO;SI

17P Request for examination filed

Effective date: 20021112

AKX Designation fees paid

Designated state(s): DE FR GB IT

17Q First examination report despatched

Effective date: 20100203

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): DE FR GB IT

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 60147393

Country of ref document: DE

Effective date: 20130117

RAP2 Party data changed (patent owner data changed or rights of a patent transferred)

Owner name: AT&T INTELLECTUAL PROPERTY II, L.P.

REG Reference to a national code

Ref country code: DE

Ref legal event code: R081

Ref document number: 60147393

Country of ref document: DE

Owner name: ATT INTELLECTUAL PROPERTY II, L.P., ATLANTA, US

Free format text: FORMER OWNER: AT T CORP., NEW YORK, N.Y., US

Effective date: 20130227

Ref country code: DE

Ref legal event code: R081

Ref document number: 60147393

Country of ref document: DE

Owner name: ATT INTELLECTUAL PROPERTY II, L.P., ATLANTA, US

Free format text: FORMER OWNER: ATT CORP., NEW YORK, N.Y., US

Effective date: 20121121

Ref country code: DE

Ref legal event code: R081

Ref document number: 60147393

Country of ref document: DE

Owner name: AT&T INTELLECTUAL PROPERTY II, L.P., US

Free format text: FORMER OWNER: AT&T CORP., NEW YORK, US

Effective date: 20121121

Ref country code: DE

Ref legal event code: R081

Ref document number: 60147393

Country of ref document: DE

Owner name: AT&T INTELLECTUAL PROPERTY II, L.P., US

Free format text: FORMER OWNER: AT & T CORP., NEW YORK, US

Effective date: 20130227

Ref country code: DE

Ref legal event code: R081

Ref document number: 60147393

Country of ref document: DE

Owner name: AT&T INTELLECTUAL PROPERTY II, L.P., RENO, US

Free format text: FORMER OWNER: AT & T CORP., NEW YORK, N.Y., US

Effective date: 20130227

Ref country code: DE

Ref legal event code: R081

Ref document number: 60147393

Country of ref document: DE

Owner name: AT&T INTELLECTUAL PROPERTY II, L.P., RENO, US

Free format text: FORMER OWNER: AT&T CORP., NEW YORK, N.Y., US

Effective date: 20121121

Ref country code: DE

Ref legal event code: R081

Ref document number: 60147393

Country of ref document: DE

Owner name: AT&T INTELLECTUAL PROPERTY II, L.P., ATLANTA, US

Free format text: FORMER OWNER: AT & T CORP., NEW YORK, N.Y., US

Effective date: 20130227

Ref country code: DE

Ref legal event code: R081

Ref document number: 60147393

Country of ref document: DE

Owner name: AT&T INTELLECTUAL PROPERTY II, L.P., ATLANTA, US

Free format text: FORMER OWNER: AT&T CORP., NEW YORK, N.Y., US

Effective date: 20121121

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20130822

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 60147393

Country of ref document: DE

Effective date: 20130822

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 16

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 17

REG Reference to a national code

Ref country code: DE

Ref legal event code: R081

Ref document number: 60147393

Country of ref document: DE

Owner name: ATT INTELLECTUAL PROPERTY II, L.P., ATLANTA, US

Free format text: FORMER OWNER: ATT INTELLECTUAL PROPERTY II, L.P., RENO, NEV., US

Ref country code: DE

Ref legal event code: R082

Ref document number: 60147393

Country of ref document: DE

Representative's name: MARKS & CLERK (LUXEMBOURG) LLP, LU

Ref country code: DE

Ref legal event code: R081

Ref document number: 60147393

Country of ref document: DE

Owner name: AT&T INTELLECTUAL PROPERTY II, L.P., ATLANTA, US

Free format text: FORMER OWNER: AT&T INTELLECTUAL PROPERTY II, L.P., RENO, NEV., US

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 18

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: IT

Payment date: 20180622

Year of fee payment: 18

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20190625

Year of fee payment: 19

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20190830

Year of fee payment: 19

Ref country code: GB

Payment date: 20190627

Year of fee payment: 19

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190621

REG Reference to a national code

Ref country code: DE

Ref legal event code: R119

Ref document number: 60147393

Country of ref document: DE

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20200621

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200630

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200621

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210101