US20050209855A1 - Speech signal processing apparatus and method, and storage medium

Info

Publication number
US20050209855A1
Authority
US
United States
Prior art keywords
speech
segment
information indicating
recognition
segments
Prior art date
Legal status
Abandoned
Application number
US11/126,372
Inventor
Yasuo Okutani
Yasuhiro Komori
Toshiaki Fukada
Current Assignee
Canon Inc
Original Assignee
Canon Inc
Application filed by Canon Inc
Priority to US11/126,372
Publication of US20050209855A1

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B7/00: Electrically-operated teaching apparatus or devices working with questions and answers
    • G09B7/02: Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student
    • G09B19/00: Teaching not covered by other main groups of this subclass
    • G09B19/04: Speaking

Abstract

A speech segment search unit searches a speech database for speech segments that satisfy a phonetic environment, and a HMM learning unit computes the HMMs of phonemes on the basis of the search result. A segment recognition unit performs segment recognition of speech segments on the basis of the computed HMMs of the phonemes, and when the phoneme of the segment recognition result is equal to a phoneme of the source speech segment, that speech segment is registered in a segment dictionary.

Description

    FIELD OF THE INVENTION
  • The present invention relates to a speech signal processing apparatus and method for forming a segment dictionary used in speech synthesis, and a storage medium.
  • BACKGROUND OF THE INVENTION
  • In recent years, speech synthesis methods in which speech segments in units of phonemes, diphones, or the like are registered in a segment dictionary have become mainstream: upon producing synthetic speech, the segment dictionary is searched in accordance with input phonetic text, and synthetic speech corresponding to the phonetic text is produced by modifying and concatenating the found speech segments and outputting the resulting speech.
  • In such a speech synthesis method, the quality of each speech segment registered in the segment dictionary is important. Therefore, if the phonetic environments of the speech segments are not consistent or the speech segments include noise, synthetic speech produced using such speech segments will include allophones or noise even when the speech synthesis itself is performed with high precision.
  • SUMMARY OF THE INVENTION
  • The present invention has been made in consideration of the aforementioned prior art, and has as its object to provide a speech signal processing apparatus and method, which perform segment recognition using HMMs and register a speech segment in a dictionary in accordance with the recognition result, and a storage medium.
  • It is another object of the present invention to provide a speech signal processing apparatus and method, which form a segment dictionary that can prevent sound quality in synthetic speech from deteriorating, and a storage medium.
  • Other features and advantages of the present invention will be apparent from the following descriptions taken in conjunction with the accompanying drawings, in which like reference characters designate the same or similar parts throughout the figures thereof.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the invention and, together with the descriptions, serve to explain the principle of the invention.
  • FIG. 1 is a block diagram showing the hardware arrangement of a speech synthesis apparatus according to an embodiment of the present invention;
  • FIG. 2 is a block diagram showing the module arrangement of a speech synthesis apparatus according to the first embodiment of the present invention;
  • FIG. 3 is a flow chart showing the flow of processing in an on-line module according to the first embodiment;
  • FIG. 4 is a block diagram showing the detailed arrangement of an off-line module according to the first embodiment;
  • FIG. 5 is a flow chart showing the flow of processing in the off-line module according to the first embodiment;
  • FIG. 6 shows the format of a table that stores error recognition allowable patterns according to the third embodiment of the present invention; and
  • FIG. 7 is a flow chart showing the flow of processing in an off-line module according to the third embodiment of the present invention.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • Preferred embodiments of the present invention will be described in detail hereinafter with reference to the accompanying drawings.
  • First Embodiment
  • FIG. 1 is a block diagram showing the hardware arrangement of a speech synthesis apparatus according to an embodiment of the present invention. Note that this embodiment will exemplify a case wherein a general personal computer is used as a speech synthesis apparatus, but the present invention can be practiced using a dedicated speech synthesis apparatus or other apparatuses.
  • Referring to FIG. 1, reference numeral 101 denotes a control memory (ROM) which stores various control data used by a central processing unit (CPU) 102. The CPU 102 controls the operation of the overall apparatus by executing a control program stored in a RAM 103. Reference numeral 103 denotes a memory (RAM) which is used as a work area upon execution of various control processes by the CPU 102 to temporarily save various data, and loads and stores a control program from an external storage device 104 upon execution of various processes by the CPU 102. This external storage device includes, e.g., a hard disk, CD-ROM, or the like. Reference numeral 105 denotes a D/A converter which converts input digital data representing a speech signal into an analog signal and outputs the analog signal to a loudspeaker 109. Reference numeral 106 denotes an input unit which comprises, e.g., a keyboard and a pointing device such as a mouse or the like, which are operated by the operator. Reference numeral 107 denotes a display unit which comprises a CRT display, liquid crystal display, or the like. Reference numeral 108 denotes a bus which connects those units. Reference numeral 110 denotes a speech synthesis unit.
  • In the above arrangement, a control program for controlling the speech synthesis unit 110 of this embodiment is loaded from the external storage device 104, and is stored on the RAM 103. Various data used by this control program are stored in the control memory 101. Those data are fetched onto the memory 103 as needed via the bus 108 under the control of the CPU 102, and are used in the control processes of the CPU 102. The D/A converter 105 converts speech waveform data produced by executing the control program into an analog signal, and outputs the analog signal to the loudspeaker 109.
  • FIG. 2 is a block diagram showing the module arrangement of the speech synthesis unit 110 according to this embodiment. The speech synthesis unit 110 roughly has two modules, i.e., a segment dictionary formation module 2000 for executing a process for registering speech segments in a segment dictionary 206, and a speech synthesis module 2001 for receiving text data, and executing a process for synthesizing and outputting speech corresponding to that text data.
  • Referring to FIG. 2, reference numeral 201 denotes a text input unit for receiving arbitrary text data from the input unit 106 or external storage device 104; 202, an analysis dictionary; 203, a language analyzer; 204, a prosody generation rule holding unit; 205, a prosody generator; 206, a segment dictionary; 207, a speech segment selector; 208, a speech segment modification/concatenation unit for modifying speech segments using PSOLA (Pitch Synchronous Overlap and Add); 209, a speech waveform output unit; 210, a speech database; and 211, a segment dictionary formation unit.
  • The process in the speech synthesis module 2001 will be explained first. In the speech synthesis module 2001, the language analyzer 203 executes language analysis of text input from the text input unit 201 by looking up the analysis dictionary 202. The analysis result is input to the prosody generator 205. The prosody generator 205 generates a phoneme and prosody information on the basis of the analysis result of the language analyzer 203 and information that pertains to prosody generation rules held in the prosody generation rule holding unit 204, and outputs them to the speech segment selector 207 and speech segment modification/concatenation unit 208. Subsequently, the speech segment selector 207 selects corresponding speech segments from those held in the segment dictionary 206 using the prosody generation result input from the prosody generator 205. The speech segment modification/concatenation unit 208 modifies and concatenates speech segments output from the speech segment selector 207 in accordance with the prosody generation result input from the prosody generator 205 to generate a speech waveform. The generated speech waveform is output by the speech waveform output unit 209.
  • The segment dictionary formation module 2000 will be explained below.
  • In the process of this module, the segment dictionary formation unit 211 selects speech segments from the speech database 210 and registers them in the segment dictionary 206 on the basis of a procedure to be described later.
  • A speech synthesis process of this embodiment with the above arrangement will be described below.
  • FIG. 3 is a flow chart showing the flow of a speech synthesis process (on-line process) in the speech synthesis module 2001 shown in FIG. 2.
  • In step S301, the text input unit 201 inputs text data in units of sentences, clauses, words, or the like, and the flow advances to step S302. In step S302, the language analyzer 203 executes language analysis of the text data. The flow advances to step S303, and the prosody generator 205 generates a phoneme and prosody information on the basis of the analysis result obtained in step S302, and predetermined prosodic rules. The flow advances to step S304, and the speech segment selector 207 selects for each phoneme speech segments registered in the segment dictionary 206 on the basis of the prosody information obtained in step S303 and a predetermined phonetic environment. The flow advances to step S305, and the speech segment modification/concatenation unit 208 modifies and concatenates speech segments on the basis of the selected speech segments and the prosody information generated in step S303. The flow then advances to step S306. In step S306, the speech waveform output unit 209 outputs a speech waveform produced by the speech segment modification/concatenation unit 208 as a speech signal. In this way, synthetic speech corresponding to the input text is output.
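  • The on-line flow of steps S301 to S306 can be pictured as a simple pipeline. The sketch below is illustrative only: the object names and method signatures (analyzer.analyze, prosody_gen.generate, and so on) are hypothetical stand-ins for the modules 201 to 209 and are not part of the disclosed implementation.

```python
# Hypothetical pipeline corresponding to steps S301-S306; the module interfaces
# here are assumptions, not the disclosed API.
def synthesize(text, analyzer, prosody_gen, selector, modifier, output):
    analysis = analyzer.analyze(text)                               # S302: language analysis
    phonemes, prosody = prosody_gen.generate(analysis)              # S303: phoneme/prosody generation
    segments = [selector.select(p, prosody) for p in phonemes]      # S304: segment selection
    waveform = modifier.modify_and_concatenate(segments, prosody)   # S305: PSOLA-style modification/concatenation
    output.write(waveform)                                          # S306: output the speech waveform
    return waveform
```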
  • FIG. 4 is a block diagram showing the more detailed arrangement of the segment dictionary formation module 2000 in FIG. 2. The same reference numerals in FIG. 4 denote the same parts as in FIG. 2, and FIG. 4 shows the arrangement of the segment dictionary formation unit 211 as a characteristic feature of this embodiment in more detail.
  • Referring to FIG. 4, reference numeral 401 denotes a speech segment search unit; 402, a speech segment holding unit; 403, a HMM learning unit; 404, a HMM holding unit; 405, a segment recognition unit; 406, a recognition result holding unit; 407, a registration segment determination unit; and 408, a registration segment holding unit. Note that reference numeral 210 denotes the speech database shown in FIG. 2.
  • The speech segment search unit 401 searches the speech database 210 for speech segments that satisfy a predetermined phonetic environment. In this case, a plurality of speech segments are found. The speech segment holding unit 402 holds these found speech segments. The HMM learning unit 403 computes the cepstra of the speech segments held in the speech segment holding unit 402 by computing, e.g., the Fourier transforms of the waveforms of these speech segments, and computes and outputs the HMMs of phonemes on the basis of the computation results. The HMM holding unit 404 holds the learning results (HMMs) in units of phonemes. The segment recognition unit 405 performs segment recognition of all speech segments used in learning of the HMMs, using the learned HMMs, to obtain the HMM with the maximum likelihood (maximum likelihood HMM). It is then checked whether the speech segment of interest corresponds to the same phoneme as the maximum likelihood HMM. The recognition result holding unit 406 holds that segment recognition result. The registration segment determination unit 407 adopts, as a segment to be registered, only a speech segment for which segment recognition was successful according to the recognition result in the segment recognition unit 405. The registration segment holding unit 408 holds only a speech segment to be registered in the segment dictionary 206, as determined by the registration segment determination unit 407.
  • FIG. 5 is a flow chart showing the operation of the segment dictionary formation module 2000 according to this embodiment.
  • It is checked in step S501 if all phonemes defined by diphones as phonetic units have been processed. If phonemes to be processed remain, the flow advances to step S502; otherwise, the flow jumps to a segment recognition process in step S504.
  • In step S502, the speech segment search unit 401 searches the speech database 210 for speech segments that satisfy a predetermined phonetic environment, and holds the plurality of speech segments found by the search in the speech segment holding unit 402. The flow then advances to step S503. In step S503, the HMM learning unit 403 learns a HMM of a given phoneme using the found speech segments as learning data. More specifically, 34-dimensional vectors (16 orders of cepstra, 16 orders of delta cepstra, power, and delta power) are computed from a speech waveform sampled at 22050 Hz, at a frame period of 2.5 msec using a window duration of 25.6 msec. Note that the power and delta power values are normalized to the range from “0” to “1” in units of sentences in the speech database. A HMM initial model with a 5-state, 1-mixture distribution is formed, and a HMM is learned using the cepstrum vectors under the aforementioned conditions. After the HMM of a given phoneme obtained as a result of learning is held in the HMM holding unit 404, the flow returns to step S501 to obtain a HMM of the next phoneme.
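  • As a concrete illustration of the feature extraction described above, the sketch below computes 34-dimensional vectors (16 cepstra, 16 delta cepstra, power, delta power) at a 2.5 msec frame period with a 25.6 msec window. The use of real cepstra derived from the log magnitude spectrum, simple first-difference deltas, and a Hanning window are assumptions made for the sketch; the per-sentence normalization of power and delta power is omitted here.

```python
# Minimal sketch of the per-frame feature extraction (assumed details noted above).
import numpy as np

FS = 22050                      # sampling rate (Hz)
SHIFT = int(0.0025 * FS)        # 2.5 msec frame period
WIN = int(0.0256 * FS)          # 25.6 msec analysis window
N_CEP = 16                      # cepstral order

def segment_features(waveform):
    frames = []
    for start in range(0, len(waveform) - WIN, SHIFT):
        frame = waveform[start:start + WIN] * np.hanning(WIN)
        spectrum = np.abs(np.fft.rfft(frame)) + 1e-10
        cepstrum = np.fft.irfft(np.log(spectrum))[:N_CEP]   # 16 cepstral coefficients
        power = np.log(np.sum(frame ** 2) + 1e-10)
        frames.append(np.concatenate([cepstrum, [power]]))
    static = np.array(frames)                                # shape (T, 17): cepstra + power
    delta = np.vstack([np.zeros((1, static.shape[1])), np.diff(static, axis=0)])
    # 16 cepstra + 16 delta cepstra + power + delta power = 34 dimensions per frame
    return np.hstack([static[:, :N_CEP], delta[:, :N_CEP], static[:, -1:], delta[:, -1:]])
```

  • A 5-state, single-mixture HMM could then be trained on these vectors for each phoneme with any HMM toolkit (for instance, hmmlearn's GaussianHMM with n_components=5); the patent does not prescribe a particular implementation.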
  • In step S504, the segment recognition unit 405 performs segment recognition of all the speech segments found in step S502 using the HMMs of the respective phonemes. That is, the likelihood between each speech segment and the HMM of each phoneme is computed. The flow then advances to step S505 to obtain, for each speech segment, the HMM with the maximum likelihood for that segment, and it is checked whether that speech segment was used in learning of that HMM. If the speech segment was used in learning of that HMM, it is determined that segment recognition was successful, and the flow advances to step S506 to register that speech segment in the segment dictionary 206.
  • On the other hand, if it is determined in step S505 that the speech segment is not the one used in learning of the HMM, the flow advances to step S507, where it is determined that the speech segment is not to be registered, and the speech segment is not registered in the segment dictionary 206. After the process in step S506 or S507 is executed, the flow advances to step S508 to check whether the discrimination process in step S505 is complete for all the speech segments used in learning of the HMMs of all the phonemes in step S504. If NO in step S508, the flow returns to step S505 to repeat the aforementioned process.
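  • The registration decision of steps S504 to S508 amounts to keeping only those segments whose own phoneme yields the maximum-likelihood HMM. The following sketch illustrates this under the assumption that each HMM exposes a score() method returning the log likelihood of a segment's feature matrix; the actual units 405 to 408 are not disclosed as being implemented this way.

```python
# Illustrative registration decision (steps S504-S508); score() is an assumed
# log-likelihood interface, not part of the disclosure.
def build_segment_dictionary(segments, hmms):
    """segments: list of (phoneme_label, feature_matrix); hmms: dict phoneme_label -> model."""
    dictionary = []
    for label, feats in segments:
        scores = {ph: model.score(feats) for ph, model in hmms.items()}   # S504: likelihoods
        best = max(scores, key=scores.get)                                # S505: maximum-likelihood HMM
        if best == label:                                                 # recognition successful
            dictionary.append((label, feats))                             # S506: register
        # otherwise the segment is skipped (S507), and the loop continues (S508)
    return dictionary
```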
  • As described above, according to the first embodiment, HMMs corresponding to respective phonemes are learned using a plurality of speech segments that satisfy a predetermined phonetic environment, all the speech segments used in learning of HMMs undergo segment recognition using the learned HMMs, and only a speech segment which is determined to be used in learning of the maximum likelihood HMM is registered in the segment dictionary.
  • With this arrangement, a segment dictionary from which speech segments including allophone and noise are excluded can be formed, and a segment dictionary which can suppress deterioration of sound quality of synthetic speech can be provided. When synthetic speech is produced using the segment dictionary 206 formed according to the aforementioned procedure, deterioration of sound quality of synthetic speech can be suppressed.
  • Second Embodiment
  • In the first embodiment, the HMM learning unit 403 generates HMMs in units of phonemes, and the segment recognition unit 405 computes the likelihoods for all the speech segments used in learning of the HMMs. However, the present invention is not limited to this. For example, when diphones are used as phonemes, the phonemes may be categorized into four categories, CC, CV, VC, and VV, and speech segments that belong to the same category may undergo segment recognition. Note that C represents a consonant, and V a vowel.
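  • A hypothetical helper for this categorization is sketched below; the vowel inventory is an illustrative assumption (the patent does not enumerate one), and diphone labels are assumed to use the "x.y" notation of the later examples.

```python
# Map a diphone label such as "a.k" to one of the categories CC, CV, VC, VV.
VOWELS = set("aiueo")

def diphone_category(diphone):
    left, right = diphone.split(".")
    kind = lambda part: "V" if part[0] in VOWELS else "C"
    return kind(left) + kind(right)

# e.g. diphone_category("k.a") -> "CV"; segment recognition would then be run
# only against HMMs of diphones in the same category.
```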
  • Third Embodiment
  • In the first and second embodiments, a speech segment which is not successfully recognized is not registered. However, the present invention is not limited to this. For example, a table that describes allowable recognition error patterns in advance is prepared, and if a speech segment which is not successfully recognized matches an allowable pattern prepared in that table, the registration segment determination unit 407 determines that the speech segment can be registered in the segment dictionary 206.
  • FIG. 6 shows an example of an allowable table according to the third embodiment.
  • FIG. 6 shows an example that adopts diphones as phonemes. In this case, even when a speech segment which is used in learning of an HMM of a diphone “a.y” is recognized as “a.i”, or even when a speech segment which is used in learning of an HMM of a diphone “a.k” is recognized as “a.p” or “a.t”, such speech segment is registered in the segment dictionary as an allowable one.
  • FIG. 7 is a flow chart showing the processing in such case. This processing is executed when it is determined in step S505 in FIG. 5 that the speech segment of interest is not used in learning of the corresponding HMM. The flow advances to step S601 to search the allowable table (provided to the registration segment determination unit 407) so as to check if the diphone of the recognition result is found in that table. If it is found, the flow advances to step S506 in FIG. 5 to register that speech segment in the segment dictionary 206; otherwise, the flow advances to step S507 not to register that segment in the segment dictionary 206.
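  • A minimal sketch of this check follows. The table contents are only the examples named above ("a.y" recognized as "a.i", "a.k" recognized as "a.p" or "a.t"); in practice the allowable table would be prepared in advance and held by the registration segment determination unit 407.

```python
# Allowable-recognition-error check corresponding to step S601 (illustrative only).
ALLOWABLE = {
    "a.y": {"a.i"},
    "a.k": {"a.p", "a.t"},
}

def may_register(true_diphone, recognized_diphone):
    if recognized_diphone == true_diphone:        # ordinary success (S505 -> S506)
        return True
    return recognized_diphone in ALLOWABLE.get(true_diphone, set())   # S601
```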
  • Fourth Embodiment
  • In the second embodiment above, when diphones are used as phonemes, a speech segment which is not successfully recognized is not registered. However, the present invention is not limited to this; for a phoneme for which the number of successfully recognized segments is equal to or smaller than a threshold value and which belongs to, e.g., the category VC, a segment may be allowed if the V part of the recognition result matches.
  • Fifth Embodiment
  • In the first embodiment, the likelihoods of each speech segment with the HMMs of all phonemes obtained in step S503 are computed. However, the present invention is not limited to this. For example, likelihoods between an HMM of a given phoneme and speech segments used in learning of that HMM are computed, and N (N is an integer) best speech segments in descending order of likelihood may be registered, or only a speech segment having a likelihood equal to or higher than a predetermined threshold value may be registered.
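  • The selection described here can be pictured as follows; the score() interface is the same assumption as in the earlier sketch, and the function itself is illustrative rather than the disclosed procedure.

```python
# Keep the N best segments of a phoneme, or those above a likelihood threshold.
def select_segments(segments, hmm, n_best=None, threshold=None):
    """segments: list of (label, feature_matrix) used in learning this phoneme's HMM."""
    scored = [(hmm.score(feats), label, feats) for label, feats in segments]
    scored.sort(key=lambda item: item[0], reverse=True)          # descending likelihood
    if threshold is not None:
        scored = [item for item in scored if item[0] >= threshold]
    if n_best is not None:
        scored = scored[:n_best]
    return [(label, feats) for _, label, feats in scored]
```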
  • Sixth Embodiment
  • In the first to fifth embodiments, the likelihoods computed in step S504 are compared without being normalized. However, the present invention is not limited to this. Each likelihood may be normalized by the duration of the corresponding speech segment, and a speech segment to be registered may be selected using the normalized likelihood in the above procedure.
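  • One natural reading of this normalization (an assumption; the patent does not fix the exact definition of duration) is to divide the log likelihood by the number of frames in the segment before it is compared or thresholded:

```python
# Duration-normalized likelihood (per-frame average log likelihood); illustrative only.
def normalized_score(hmm, feats):
    return hmm.score(feats) / len(feats)
```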
  • In the above embodiments, the respective units are constructed on a single computer. However, the present invention is not limited to such specific arrangement, and the respective units may be divisionally constructed on computers or processing apparatuses distributed on a network.
  • In the above embodiments, the program is held in the control memory (ROM). However, the present invention is not limited to such specific arrangement, and the program may be implemented using an arbitrary storage medium such as an external storage or the like. Alternatively, the program may be implemented by a circuit that can attain the same operation.
  • Note that the present invention may be applied to either a system constituted by a plurality of devices, or an apparatus consisting of a single equipment. The present invention is also achieved by supplying a recording medium, which records a program code of software that can implement the functions of the above-mentioned embodiments to the system or apparatus, and reading out and executing the program code stored in the recording medium by a computer (or a CPU or MPU) of the system or apparatus.
  • In this case, the program code itself read out from the recording medium implements the functions of the above-mentioned embodiments, and the recording medium which records the program code constitutes the present invention.
  • As the recording medium for supplying the program code, for example, a floppy disk, hard disk, optical disk, magneto-optical disk, CD-ROM, CD-R, magnetic tape, nonvolatile memory card, ROM, and the like may be used. The functions of the above-mentioned embodiments may be implemented not only by executing the readout program code by the computer but also by some or all of actual processing operations executed by an OS (operating system) running on the computer on the basis of an instruction of the program code.
  • Furthermore, the functions of the above-mentioned embodiments may be implemented by some or all of actual processing operations executed by a CPU or the like arranged in a function extension board or a function extension unit, which is inserted in or connected to the computer, after the program code read out from the recording medium is written in a memory of the extension board or unit.
  • As described above, according to the above embodiments, since speech segments to be registered in the segment dictionary are selected by exploiting the segment recognition results obtained using HMMs, a speech synthesis apparatus and method can be provided which exclude speech segments that include allophone or noise and which produce synthetic speech that suffers less deterioration of sound quality.
  • The present invention is not limited to the above embodiments and various changes and modifications can be made within the spirit and scope of the present invention. Therefore, to apprise the public of the scope of the present invention, the following claims are made.

Claims (18)

1. A speech signal processing apparatus comprising:
HMM learning means for computing HMMs of speech segments with information indicating a phonetic environment in a speech database;
segment recognition means for performing segment recognition of the speech segments in the speech database on the basis of the HMMs; and
registration means for registering a speech segment in a segment dictionary, in a case where the recognition result of the speech segment by said segment recognition means corresponds to the information indicating the phonetic environment of the speech segment.
2-23. (canceled)
24. The apparatus according to claim 1, wherein the information indicating the phonetic environment is a diphone label, and said segment recognition means categorizes speech segments into four categories CC, CV, VC, and VV (C: a consonant, V: a vowel), and performs segment recognition in each category.
25. The apparatus according to claim 1, wherein said registration means comprises:
pattern storage means which has allowable patterns of information indicating the phonetic environment, and
said registration means checks if information indicating the phonetic environment of the speech segment matches one of the allowable patterns of information indicating the phonetic environment even if the information indicating the phonetic environment is not equal to the recognition result of said segment recognition means.
26. The apparatus according to claim 1, wherein said segment recognition means computes likelihoods of speech segments of identical information indicating the phonetic environment, and
said registration means registers, in the segment dictionary, speech segments having maximum likelihoods or having likelihoods not less than a predetermined value.
27. The apparatus according to claim 26, wherein said registration means registers, in the segment dictionary, speech segments having upper values obtained by normalizing the likelihoods by durations of the speech segments or likelihoods having the values not less than a predetermined value.
28. A speech signal processing method comprising:
an HMM learning step of computing HMMs of speech segments with information indicating a phonetic environment in a speech database;
a segment recognition step of performing segment recognition of the speech segments in the speech database on the basis of the HMMs; and
a registration step of registering a speech segment in a segment dictionary, in a case where the recognition result of the speech segment in said segment recognition step corresponds to the information indicating the phonetic environment of the speech segment.
29. The method according to claim 28, wherein the information indicating the phonetic environment is a diphone label, and said segment recognition step categorizes speech segments into four categories CC, CV, VC, and VV (C: a consonant, V: a vowel), and includes the step of performing segment recognition in each category.
30. The method according to claim 28, wherein said registration step comprises:
a pattern storage step of registering allowable patterns of information indicating the phonetic environment, and
said registration step includes a step of checking whether the information indicating the phonetic environment of the speech segment matches one of the allowable patterns of information indicating the phonetic environment even if the information indicating the phonetic environment is not equal to the result in said segment recognition step.
31. The method according to claim 28, wherein said segment recognition step includes a step of computing likelihoods of speech segments of identical information indicating the phonetic environment, and
said registration step includes a step of registering, in the segment dictionary, speech segments having maximum likelihoods or having likelihoods not less than a predetermined value.
32. The method according to claim 31, wherein said registration step includes a step of registering, in the segment dictionary, speech segments having upper values obtained by normalizing the likelihoods by durations of the speech segments or likelihoods having the values not less than a predetermined value.
33. A computer readable storage medium storing a program for implementing the method according to claim 28.
34. A speech synthesis apparatus comprising:
speech synthesis means for synthesizing speech using the segment dictionary made by the speech signal processing apparatus according to claim 1.
35. A speech synthesis method comprising:
a speech synthesis step of synthesizing speech using the segment dictionary made by the speech signal processing method according to claim 28.
36. A computer readable storage medium storing a program for implementing the method according to claim 35.
37. A speech signal processing apparatus comprising:
HMM learning means for computing HMMs of speech segments with information indicating a phonetic environment in a speech database;
segment recognition means for performing segment recognition of the speech segments in the speech database on the basis of the HMMs;
judgment means for judging whether the result of the segment recognition corresponds to the information indicating the phonetic environment of a speech segment; and
storage means for storing the result of the judgment judged by said judgment means associated with the speech segment.
38. A speech signal processing method comprising:
an HMM learning step of computing HMMs of speech segments with information indicating a phonetic environment in a speech database;
a segment recognition step of performing segment recognition of the speech segments in the speech database on the basis of the HMMs;
a judgment step of judging whether the result of the segment recognition corresponds to the information indicating the phonetic environment of a speech segment; and
a storage step of storing the result of the judgment judged in said judgment step associated with the speech segment.
39. A computer readable storage medium storing a program for implementing the method according to claim 38.
US11/126,372 2000-03-31 2005-05-11 Speech signal processing apparatus and method, and storage medium Abandoned US20050209855A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/126,372 US20050209855A1 (en) 2000-03-31 2005-05-11 Speech signal processing apparatus and method, and storage medium

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2000-099532 2000-03-31
JP2000099532A JP4632384B2 (en) 2000-03-31 2000-03-31 Audio information processing apparatus and method and storage medium
US09/819,613 US7054814B2 (en) 2000-03-31 2001-03-29 Method and apparatus of selecting segments for speech synthesis by way of speech segment recognition
US11/126,372 US20050209855A1 (en) 2000-03-31 2005-05-11 Speech signal processing apparatus and method, and storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/819,613 Division US7054814B2 (en) 2000-03-31 2001-03-29 Method and apparatus of selecting segments for speech synthesis by way of speech segment recognition

Publications (1)

Publication Number Publication Date
US20050209855A1 (en) 2005-09-22

Family

ID=18613872

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/819,613 Expired - Fee Related US7054814B2 (en) 2000-03-31 2001-03-29 Method and apparatus of selecting segments for speech synthesis by way of speech segment recognition
US11/126,372 Abandoned US20050209855A1 (en) 2000-03-31 2005-05-11 Speech signal processing apparatus and method, and storage medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/819,613 Expired - Fee Related US7054814B2 (en) 2000-03-31 2001-03-29 Method and apparatus of selecting segments for speech synthesis by way of speech segment recognition

Country Status (2)

Country Link
US (2) US7054814B2 (en)
JP (1) JP4632384B2 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050197839A1 (en) * 2004-03-04 2005-09-08 Samsung Electronics Co., Ltd. Apparatus, medium, and method for generating record sentence for corpus and apparatus, medium, and method for building corpus using the same
US20070124148A1 (en) * 2005-11-28 2007-05-31 Canon Kabushiki Kaisha Speech processing apparatus and speech processing method
US20080177548A1 (en) * 2005-05-31 2008-07-24 Canon Kabushiki Kaisha Speech Synthesis Method and Apparatus
US20080228487A1 (en) * 2007-03-14 2008-09-18 Canon Kabushiki Kaisha Speech synthesis apparatus and method
US20160283453A1 (en) * 2015-03-26 2016-09-29 Lenovo (Singapore) Pte. Ltd. Text correction using a second input

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7039588B2 (en) * 2000-03-31 2006-05-02 Canon Kabushiki Kaisha Synthesis unit selection apparatus and method, and storage medium
US6950798B1 (en) * 2001-04-13 2005-09-27 At&T Corp. Employing speech models in concatenative speech synthesis
JP2003295882A (en) 2002-04-02 2003-10-15 Canon Inc Text structure for speech synthesis, speech synthesizing method, speech synthesizer and computer program therefor
JP3673507B2 (en) * 2002-05-16 2005-07-20 独立行政法人科学技術振興機構 APPARATUS AND PROGRAM FOR DETERMINING PART OF SPECIFIC VOICE CHARACTERISTIC CHARACTERISTICS, APPARATUS AND PROGRAM FOR DETERMINING PART OF SPEECH SIGNAL CHARACTERISTICS WITH HIGH RELIABILITY, AND Pseudo-Syllable Nucleus Extraction Apparatus and Program
JP4587160B2 (en) * 2004-03-26 2010-11-24 キヤノン株式会社 Signal processing apparatus and method
JP4328698B2 (en) * 2004-09-15 2009-09-09 キヤノン株式会社 Fragment set creation method and apparatus
US7979718B2 (en) * 2005-03-31 2011-07-12 Pioneer Corporation Operator recognition device, operator recognition method and operator recognition program
JP4773988B2 (en) * 2007-02-06 2011-09-14 日本電信電話株式会社 Hybrid type speech synthesis method, apparatus thereof, program thereof, and storage medium thereof
US8543393B2 (en) * 2008-05-20 2013-09-24 Calabrio, Inc. Systems and methods of improving automated speech recognition accuracy using statistical analysis of search terms
US20100105015A1 (en) * 2008-10-23 2010-04-29 Judy Ravin System and method for facilitating the decoding or deciphering of foreign accents
JP5326546B2 (en) * 2008-12-19 2013-10-30 カシオ計算機株式会社 Speech synthesis dictionary construction device, speech synthesis dictionary construction method, and program
US8965768B2 (en) * 2010-08-06 2015-02-24 At&T Intellectual Property I, L.P. System and method for automatic detection of abnormal stress patterns in unit selection synthesis
US9778826B2 (en) * 2011-05-24 2017-10-03 Indu Mati Anand Method and system for computer-aided consumption of information from application data files
JP5842452B2 (en) * 2011-08-10 2016-01-13 カシオ計算機株式会社 Speech learning apparatus and speech learning program
US8725508B2 (en) * 2012-03-27 2014-05-13 Novospeech Method and apparatus for element identification in a signal
JP6535998B2 (en) * 2014-09-16 2019-07-03 カシオ計算機株式会社 Voice learning device and control program

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0573100A (en) 1991-09-11 1993-03-26 Canon Inc Method and device for synthesising speech
JP3397372B2 (en) 1993-06-16 2003-04-14 キヤノン株式会社 Speech recognition method and apparatus
JPH0792997A (en) * 1993-09-22 1995-04-07 N T T Data Tsushin Kk Speech synthesizing device
JPH07114568A (en) * 1993-10-20 1995-05-02 Brother Ind Ltd Data retrieval device
JP3548230B2 (en) 1994-05-30 2004-07-28 キヤノン株式会社 Speech synthesis method and apparatus
JP3559588B2 (en) 1994-05-30 2004-09-02 キヤノン株式会社 Speech synthesis method and apparatus
US5905972A (en) * 1996-09-30 1999-05-18 Microsoft Corporation Prosodic databases holding fundamental frequency templates for use in speech synthesis
JPH10161692A (en) * 1996-12-03 1998-06-19 Canon Inc Voice recognition device, and method of recognizing voice
JPH11126094A (en) * 1997-10-21 1999-05-11 Toyo Commun Equip Co Ltd Voice synthesizing device
JPH11327594A (en) * 1998-05-13 1999-11-26 Ricoh Co Ltd Voice synthesis dictionary preparing system

Patent Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5311429A (en) * 1989-05-17 1994-05-10 Hitachi, Ltd. Maintenance support method and apparatus for natural language processing system
US5845047A (en) * 1994-03-22 1998-12-01 Canon Kabushiki Kaisha Method and apparatus for processing speech information using a phoneme environment
US6076061A (en) * 1994-09-14 2000-06-13 Canon Kabushiki Kaisha Speech recognition apparatus and method and a computer usable medium for selecting an application in accordance with the viewpoint of a user
US6385339B1 (en) * 1994-09-14 2002-05-07 Hitachi, Ltd. Collaborative learning system and pattern recognition method
US5787396A (en) * 1994-10-07 1998-07-28 Canon Kabushiki Kaisha Speech recognition method
US20010043344A1 (en) * 1994-11-14 2001-11-22 Takashi Imai Image processing apparatus capable of connecting external information processing terminal, and including printer unit and data processing unit
US6333794B2 (en) * 1994-11-14 2001-12-25 Canon Kabushiki Kaisha Image processing apparatus capable of connecting external information processing terminal, and including printer unit and data processing unit
US5812975A (en) * 1995-06-19 1998-09-22 Canon Kabushiki Kaisha State transition model design method and voice recognition method and apparatus using same
US6662159B2 (en) * 1995-11-01 2003-12-09 Canon Kabushiki Kaisha Recognizing speech data using a state transition model
US5970445A (en) * 1996-03-25 1999-10-19 Canon Kabushiki Kaisha Speech recognition using equal division quantization
US5913193A (en) * 1996-04-30 1999-06-15 Microsoft Corporation Method and system of runtime acoustic unit selection for speech synthesis
US6108628A (en) * 1996-09-20 2000-08-22 Canon Kabushiki Kaisha Speech recognition method and apparatus using coarse and fine output probabilities utilizing an unspecified speaker model
US6021388A (en) * 1996-12-26 2000-02-01 Canon Kabushiki Kaisha Speech synthesis apparatus and method
US6236962B1 (en) * 1997-03-13 2001-05-22 Canon Kabushiki Kaisha Speech processing apparatus and method and computer readable medium encoded with a program for recognizing input speech by performing searches based on a normalized current feature parameter
US6266636B1 (en) * 1997-03-13 2001-07-24 Canon Kabushiki Kaisha Single distribution and mixed distribution model conversion in speech recognition method, apparatus, and computer readable medium
US5926784A (en) * 1997-07-17 1999-07-20 Microsoft Corporation Method and system for natural language parsing using podding
US6000024A (en) * 1997-10-15 1999-12-07 Fifth Generation Computer Corporation Parallel computing system
US20020107688A1 (en) * 1998-03-10 2002-08-08 Mitsuru Otsuka Speech synthesizing method and apparatus
US6546367B2 (en) * 1998-03-10 2003-04-08 Canon Kabushiki Kaisha Synthesizing phoneme string of predetermined duration by adjusting initial phoneme duration on values from multiple regression by adding values based on their standard deviations
US6374210B1 (en) * 1998-11-30 2002-04-16 U.S. Philips Corporation Automatic segmentation of a text
US20020095282A1 (en) * 2000-12-11 2002-07-18 Silke Goronzy Method for online adaptation of pronunciation dictionaries

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050197839A1 (en) * 2004-03-04 2005-09-08 Samsung Electronics Co., Ltd. Apparatus, medium, and method for generating record sentence for corpus and apparatus, medium, and method for building corpus using the same
US8635071B2 (en) * 2004-03-04 2014-01-21 Samsung Electronics Co., Ltd. Apparatus, medium, and method for generating record sentence for corpus and apparatus, medium, and method for building corpus using the same
US20080177548A1 (en) * 2005-05-31 2008-07-24 Canon Kabushiki Kaisha Speech Synthesis Method and Apparatus
US20070124148A1 (en) * 2005-11-28 2007-05-31 Canon Kabushiki Kaisha Speech processing apparatus and speech processing method
US20080228487A1 (en) * 2007-03-14 2008-09-18 Canon Kabushiki Kaisha Speech synthesis apparatus and method
US8041569B2 (en) 2007-03-14 2011-10-18 Canon Kabushiki Kaisha Speech synthesis method and apparatus using pre-recorded speech and rule-based synthesized speech
US20160283453A1 (en) * 2015-03-26 2016-09-29 Lenovo (Singapore) Pte. Ltd. Text correction using a second input
US10726197B2 (en) * 2015-03-26 2020-07-28 Lenovo (Singapore) Pte. Ltd. Text correction using a second input

Also Published As

Publication number Publication date
US7054814B2 (en) 2006-05-30
JP4632384B2 (en) 2011-02-16
US20020051955A1 (en) 2002-05-02
JP2001282277A (en) 2001-10-12

Similar Documents

Publication Publication Date Title
US20050209855A1 (en) Speech signal processing apparatus and method, and storage medium
US6826531B2 (en) Speech information processing method and apparatus and storage medium using a segment pitch pattern model
US6910012B2 (en) Method and system for speech recognition using phonetically similar word alternatives
JP4328698B2 (en) Fragment set creation method and apparatus
US6778960B2 (en) Speech information processing method and apparatus and storage medium
US5949961A (en) Word syllabification in speech synthesis system
EP1557821B1 (en) Segmental tonal modeling for tonal languages
JP4936696B2 (en) Testing and tuning an automatic speech recognition system using synthetic inputs generated from an acoustic model of the speech recognition system
US6839667B2 (en) Method of speech recognition by presenting N-best word candidates
US7039588B2 (en) Synthesis unit selection apparatus and method, and storage medium
CN101785051B (en) Voice recognition device and voice recognition method
US5758320A (en) Method and apparatus for text-to-voice audio output with accent control and improved phrase control
US20010047259A1 (en) Speech synthesis apparatus and method, and storage medium
US9978360B2 (en) System and method for automatic detection of abnormal stress patterns in unit selection synthesis
US6477495B1 (en) Speech synthesis system and prosodic control method in the speech synthesis system
CN1179587A (en) Prosodic databases holding fundamental frequency templates for use in speech synthesis
JP2008046538A (en) System supporting text-to-speech synthesis
US6963834B2 (en) Method of speech recognition using empirically determined word candidates
JPH0713594A (en) Method for evaluation of quality of voice in voice synthesis
CN111599339A (en) Speech splicing synthesis method, system, device and medium with high naturalness
JP2583074B2 (en) Voice synthesis method
Taylor Synthesizing intonation using the RFC model.
Altosaar et al. Finnish and Estonian speech applications developed on an object-oriented speech processing and database system
JP2721341B2 (en) Voice recognition method
Majji Building a Tamil Text-to-Speech Synthesizer using Festival

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION