CA1268546A - Stereophonic voice signal transmission system - Google Patents

Stereophonic voice signal transmission system

Info

Publication number
CA1268546A
Authority
CA
Canada
Prior art keywords
voice signal
signal
sub
voice
transfer function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired
Application number
CA000517216A
Other languages
French (fr)
Other versions
CA1268546C (en)
Inventor
Shigenobu Minami
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toshiba Corp
Original Assignee
Toshiba Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP60191746A external-priority patent/JPH0758938B2/en
Priority claimed from JP8284086A external-priority patent/JPH0761043B2/en
Application filed by Toshiba Corp filed Critical Toshiba Corp
Application granted granted Critical
Publication of CA1268546C publication Critical patent/CA1268546C/en
Publication of CA1268546A publication Critical patent/CA1268546A/en
Expired legal-status Critical Current


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04H BROADCAST COMMUNICATION
    • H04H20/00 Arrangements for broadcast or for distribution combined with broadcast
    • H04H20/86 Arrangements characterised by the broadcast information itself
    • H04H20/88 Stereophonic broadcast systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M3/00 Automatic or semi-automatic exchanges
    • H04M3/42 Systems providing special services or facilities to subscribers
    • H04M3/56 Arrangements for connecting several subscribers to a common circuit, i.e. affording conference facilities
    • H04M3/561 Arrangements for connecting several subscribers to a common circuit, i.e. affording conference facilities by multiplexing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S1/00 Two-channel systems
    • H04S1/007 Two-channel systems in which the audio signals are in digital form
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S5/00 Pseudo-stereo systems, e.g. in which additional channel signals are derived from monophonic signals by means of phase shifting, time delay or reverberation

Abstract

ABSTRACT OF THE DISCLOSURE
In a stereophonic voice transmission system for transmitting voice signals of a plurality of channels, a transmitting end encodes main data consisting of the voice signal of at least one channel among the voice signals of the plurality of channels and additional data required for reproducing the voice signals of the remaining channels from the main data, and transmits the coded main data and the coded additional data to a receiving end. The receiving end decodes the coded main data and the coded additional data, and combines the main data with the additional data to reproduce the voice signals of the remaining channels, thereby transmitting stereophonic voice signals along a transmission line of a low transmission rate, with high quality and at low cost.

Description


The present invention relates to a voice transmission system and, more particularly, to a stereophonic voice signal transmission system.

The present invention will be illustrated by way of the accompanying drawings, in which:

Fig. 1 is a schematic view of a stereophonic voice transmission system according to a first embodiment of the present invention;

Fig. 2 is a schematic view of a stereophonic voice transmission system according to a second embodiment of the present invention;

Figs. 3(A), 3(B) and 3(C) are graphs showing impulse responses according to the second embodiment;

Figs. 4(A) and 4(B) are block diagrams respectively showing transmitting and receiving ends in accordance with the method of the second embodiment;

Fig. 5 is a chart showing a frame structure of the second embodiment;

Fig. 6 is a timing chart for explaining the operation of the second embodiment;

Fig. 7 is a schematic view of a stereophonic voice transmission system according to a third embodiment of the present invention;

Fig. 8 is a schematic view of a stereophonic voice transmission system according to a fourth embodiment of the present invention;

Figs. 9(A) and 9(B) are block diagrams of an estimator according to the fourth embodiment;

Fig. 10 is a schematic view showing a stereophonic voice transmission system according to a fifth embodiment of the present invention;

Fig. 11 is a circuit diagram showing a detailed arrangement of a sixth embodiment of the present invention;

Fig. 12 is a block diagram of an adaptive quantizer and a dequantizer used in the circuit of Fig. 11;

Fig. 13 is a schematic view of a conventional teleconferencing system; and

Fig. 14 is a schematic view showing a conventional stereophonic voice transmission system.

Along with the development of telecommunications techniques, demand has recently arisen for teleconferencing systems for allowing attendants at remote locations to participate in a teleconference.

Conventional teleconferencing systems are adapted to transmit and receive image data (e.g., television image data, electronic blackboard data, and facsimile data) and voice data between remote terminals, so the information transmission cost is desirably reduced. In particular, if data can be transmitted at a bit rate of 64 kbps on normal subscriber lines, a teleconferencing system can be realized at low cost as compared with a high-quality teleconferencing system using optical fibers. A solution for low-cost teleconferencing is deemed to be the key to popularity and widespread application of teleconferencing systems in small and medium business corporations and at home when an ISDN (Integrated Services Digital Network) for digitizing communications systems for individual subscribers is established.

In a teleconferencing system using, for example, a 64-kbps transmission line, it is necessary to compress a large amount of image and voice information so as not to interfere with conference proceedings.

Fig. 13 shows an overall system configuration of a conventional teleconferencing system. This system comprises a microphone 1, a loudspeaker 2, a television camera 3 as a man-machine image interface, a television monitor 4, an electronic blackboard 5, a facsimile system 6, a telewriting device 7, a voice unit 8 for coding voice data to 16-kbps data or decoding 16-kbps data to voice data, a control unit 9 (to be described in detail), a control pad 10 for inputting instructions to the control unit 9, an image unit 11 for coding image data designated by the control unit 9 to 48-kbps data or decoding 48-kbps data to image data, and a transmission unit 13 for transmitting and receiving voice and image signals through a 64-kbps transmission line 12.

In a conventional teleconferencing system using a transmission line having a low bit rate, even monaural voice data must be compressed to 16-kbps data or the like by a voice data compression scheme such as Adaptive Differential Pulse Code Modulation (ADPCM). Therefore, stereophonic voice data is not used in the conventional teleconferencing system.

Stereophonic voice is desirably adapted for a conventional teleconferencing system to create a feeling of being a participant in a conference and to discriminate between speakers.

If stereophonic voice having the above advantages is used in a conventional teleconferencing system using a transmission line of a low bit rate as described above, a stereophonic voice transmission system shown in Fig. 14 is required. In this stereophonic voice transmission system, the right- and left-channel voice signals double the amount of transmission data as compared with the monaural mode.

For this reason, if stereophonic voice is used in a conventional teleconferencing system using a low bit rate, e.g., 64 kbps according to a conventional scheme, the following techniques are required: (a) a technique for compressing one-channel voice transmission data to 8-kbps data; and (b) a technique for reducing the bit rate of image transmission data from 48 kbps to 32 kbps.

The technique (a) degrades voice quality, and the technique (b) results in image quality degradation and/or poor service.

According to the conventional stereophonic transmission systems, it is very difficult to use stereophonic voice in a teleconferencing system using a transmission line of a low bit rate.

The present invention provides a stereophonic voice transmission system capable of high-quality transmission and high-quality voice reproduction over a transmission line of a low bit rate.

The present invention also provides a stereophonic voice transmission system comprising a small-capacity storage means and capable of transmitting and reproducing stereophonic voice of higher quality.

The present invention further provides a stereophonic voice transmission and storage system capable of transmitting and reproducing high-quality stereophonic voice at low cost.

According to the present invention, there is provided a stereophonic voice transmission system in which a transmitting end encodes main data consisting of the voice signal of at least one channel among the voice signals of a plurality of channels, and additional data required for use together with the main data to reproduce the voice signals of the remaining channels, and sends the coded main data and the coded additional data; a receiving end decodes the voice signal of each channel sent as the coded main data and reproduces the voice signals of the remaining channels from the coded main data and the coded additional data.

According to the present invention, in stereophonic voice transmission, only the main voice signal and difference signals or compressed difference signals are transmitted and then are received and combined by a receiving end. Therefore, data transmission can be performed using a smaller amount of data as compared with a conventional transmission or storage system, and high-quality stereophonic transmission or storage can be achieved at low cost.

Thus, according to one aspect thereof the present invention provides a stereophonic voice transmission system for transmitting a plurality of sound signals over a plurality of channels through a transmission line, said plurality of sound signals producing a main voice signal from a right channel and a sub-voice signal from a left channel, comprising (a) a transmitting end including, estimating means for estimating additional data required for generating said sub-voice signal based on a predetermined formula using said main voice signal and said sub-voice signal as inputs, and transmitting means for coding and transmitting said main voice signal and said additional data; and (b) a receiving end including, receiving means for receiving and decoding said coded main voice signal and said coded additional data, and generating means for generating said main voice signal from said decoded main voice signal and for generating said sub-voice signal from said decoded additional data and said decoded main voice signal. Suitably said additional data is a transfer function, an impulse response, or an approximated transfer function or an approximated impulse response. Desirably said estimating means estimates a transfer function between a sound source and a respective one of a plurality of microphones, each of said microphones receiving as inputs a plurality of sound sources voice signals, and wherein said generating means generates a plurality of sound signals from said transfer function and said main voice signal. Preferably said estimating means comprises an estimator, said estimator estimating said transfer function or said impulse response between a sound source and a respective one of a plurality of microphones, each of said microphones receiving a plurality of sound sources, and said generating means includes means for generating a plurality of sound signals from said approximated transfer function or said approximated impulse response and at least one sound signal input to said microphones. Suitably said additional data is identification data for identifying a transfer function, an impulse response, or a plurality of collected and stored data of approximated data of said transfer function or said impulse response.

In one embodiment of the present invention said estimating means comprises estimating circuit means for estimating a transfer function or an impulse response between a sound source and a respective one of a plurality of microphones, each of said microphones receiving as inputs a plurality of sound sources, partial extractor means for extracting an approximated transfer function or an approximated impulse response for said transfer function or said impulse response, table means for prestoring a collected approximated transfer function or a collected impulse response, and encoder means for comparing said approximated transfer function or said approximated impulse response which is extracted by said partial extractor with said collected approximated transfer function or said collected approximated impulse response which is read out from said table means, and for encoding as a result a comparison therebetween; and wherein said generating means comprises, decryptor means for decrypting said encoded comparison into a corresponding approximated transfer function or a corresponding approximated impulse response, and means for generating a plurality of sound signals from a decrypted approximated transfer function or a decrypted approximated impulse response, and from at least one sound source received by one of said microphones.

In another aspect thereof the present invention provides a stereophonic voice transmission system for transmitting a plurality of sound signals through a transmission line, said plurality of sound signals producing a main voice signal from a right channel and a sub-voice signal from a left channel, comprising (a) a transmitting end including, estimating means for estimating said sub-voice signal from said main voice signal, and for evaluating a difference signal between an estimated voice signal and said sub-voice signal, or for evaluating a compressed difference signal obtained by causing a compressing means to compress said difference signal, said estimating means controlling an estimation parameter, and transmitting means for transmitting said main voice signal, and said difference signal or said compressed difference signal to a receiving end of another system; and (b) a receiving end including, receiving means for receiving and decoding said main voice signal, and said difference signal or said compressed difference signal, estimating means, equivalent to that in said transmitting end, for evaluating a decoded difference signal or a decoded expanded signal obtained by causing an expanding means to expand said compressed difference signal from said transmitting end, said estimating means producing an estimated signal, and generating means for generating said sub-voice signal by adding said difference signal or said expanded difference signal to said estimated signal produced by said estimating means. Suitably said transmitting end comprises storage means for storing said main voice signal and said difference signal or said compressed difference signal therein. Desirably said compressing means comprises an adaptive estimator or an adaptive quantizer. Preferably said estimating means comprises a delay circuit means for delaying at least one sound signal of said plurality of sound signals, said estimating means estimating other ones of said plurality of sound signals in a time region according to said at least one sound signal, and a subtractor means for subtracting each of said estimated other sound signals from said at least one sound signal to obtain a difference signal; wherein said receiving means comprises a generator means for generating said other sound signals in the time region from said at least one sound signal, and an adder means for adding each of said other sound signals produced by said generator to said difference signal. Suitably said system comprises a teleconferencing system.

In a still further aspect thereof the present invention provides a stereophonic voice transmission system for transmitting a plurality of sound signals through a transmission line, said plurality of sound signals producing a main voice signal from a right channel and a sub-voice signal from a left channel, comprising (a) a transmitting end including, estimating means for estimating said sub-voice signal from said main voice signal, compressing means for obtaining a compressed difference signal from a difference signal representative of a difference between an estimated sub-voice signal and said sub-voice signal, means for evaluating said compressed difference signal and for controlling an estimation parameter, and transmitting means for coding and transmitting said main voice signal, and said difference signal or said compressed difference signal, (b) a receiving end including, receiving means for receiving and decoding said main voice signal, and said difference signal or said compressed difference signal, estimating means, equivalent to that in said transmitting end, for evaluating said decoded difference signal or said compressed difference signal and for producing an estimated sub-voice signal, and generating means for generating said sub-voice signal by adding said difference signal or said compressed difference signal to said estimated sub-voice signal. Suitably said transmitting end comprises storage means for storing said main voice signal, and said difference signal, or said compressed difference signal. Desirably said compressing means comprises an adaptive estimator. Preferably said compressing means comprises an adaptive quantizer. Suitably said plurality of sound signals are collected by a plurality of microphones. Desirably said estimating means comprises delay circuit means for delaying at least one sound signal collected by one of said microphones, wherein said estimating means comprises circuit means for estimating other remaining sound signals input to said microphones over a predetermined time period, and subtracter means for subtracting said estimated remaining sound signals from said at least one sound signal to obtain a difference signal, and wherein said generating means comprises circuit means for generating said remaining sound signals in the predetermined time period from at least one sound signal input to one of said microphones, and adder means for adding said remaining generated sound signals to said difference signal.
In another aspect thereof the present invention provides a stereophonic voice transmission system for transmitting a plurality of sound signals, said plurality of sound signals producing a main voice signal from a right channel and a sub-voice signal from a left channel, through a transmission line, comprising estimating means for estimating additional data required for generating said sub-voice signal based on a predetermined formula using said main voice signal and said sub-voice signal as inputs; and transmitting means for coding and transmitting said main voice signal and said additional data.

In a still further aspect thereof the present invention provides a stereophonic voice reception system for receiving a plurality of sound signals through a transmission line comprising receiving means for receiving and decoding a coded main voice signal from a right channel and coded additional data representative of a sub-voice signal from a left channel; and generating means for generating said main voice signal from said decoded main voice signal and for generating said sub-voice signal from said decoded additional data and said decoded main voice signal.

The present invention will be described in detail by exemplifying teleconferencing systems with reference to the accompanying drawings.

For the sake of simplicity, the transmission direction is represented by only one direction in the following description. A plurality of attendants or speakers (A1 to A4 or B1 to B4) in one of the conference rooms rarely speak simultaneously in a normal conference atmosphere. Even if the attendants in one conference room start to speak simultaneously, the necessity for stereophonic voice transmission is low as compared with the case wherein an individual attendant speaks and his voice data is to be transmitted. In the following description, a case will be exemplified wherein sounds from each speaker in the form of voice information in a conference are transmitted as stereophonic voice data.

The principle of stereophonic voice transmission according to the present invention will be described below.

Speaker's voice X(w) (where w is the angular frequency) in the form of a single utterance is input to right- and left-channel microphones 1R and 1L. In this case, an echo component from a wall is neglected. If the right- and left-channel transfer functions are defined as GR(w) and GL(w), the left- and right-channel voice signals YL(w) and YR(w) are given as follows:

YL(w) = GL(w) · X(w)   ---(1)
YR(w) = GR(w) · X(w)   ---(2)

Substitution of equation (2) into equation (1) yields the following equation:

YL(w) = (GL(w) / GR(w)) · YR(w)   ---(3)
      = G(w) · YR(w)              ---(4)

The above equation indicates that the voice signals of the right and left channels can be reproduced if only the transfer function G(w) and the voice signal of one channel are known.

According to the present invention, therefore, if the voice signal of one channel and a transfer function are transmitted (i.e., the voice signals of both channels need not be transmitted), the receiving end can reproduce the voice signals of the right and left channels, thus realizing stereophonic voice transmission. In this case, the transfer function can be approximated by a simple delay and attenuation if a lower approximation precision is acceptable. The transfer function thus requires a smaller number of data as compared with the voice signal YL(w), so that stereophonic voice transmission is achieved.
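As a rough sketch of this principle (the frame length, the toy delay-and-attenuation model and the function names are illustrative assumptions, not taken from the patent), the transfer function of equation (3) can be estimated frame by frame as the ratio of the channel spectra, and the left channel regenerated from the right channel alone as in equation (4):

```python
import numpy as np

def estimate_transfer_function(y_left, y_right, eps=1e-12):
    """Estimate G(w) = YL(w) / YR(w) for one frame (equation (3))."""
    YL = np.fft.rfft(y_left)
    YR = np.fft.rfft(y_right)
    return YL / (YR + eps)          # additional data sent to the receiver

def reproduce_left_channel(y_right, G):
    """Receiving end: YL(w) = G(w) * YR(w) (equation (4))."""
    YR = np.fft.rfft(y_right)
    return np.fft.irfft(G * YR, n=len(y_right))

# toy example: the "left" channel is a delayed, attenuated copy of the right
rng = np.random.default_rng(0)
y_right = rng.standard_normal(512)
y_left = 0.7 * np.roll(y_right, 5)   # simple delay-and-attenuation model

G = estimate_transfer_function(y_left, y_right)
y_left_hat = reproduce_left_channel(y_right, G)
print(np.max(np.abs(y_left - y_left_hat)))   # small reconstruction error
```

Only G needs to be sent alongside the right channel; because G varies much more slowly than the voice itself, it can be updated far less often than once per sample.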

Fig. 1 schematically shows a stereophonic voice transmission system according to a first embodiment of the present invention.

The stereophonic voice transmission system comprises left- and right-channel microphones 1L and 1R, loudspeakers 2L and 2R, an estimator 20 for estimating a transfer function G(w), and a generator 21 for combining the transfer function G(w) and the right-channel voice signal YR(w) to produce the left-channel voice signal YL(w).

Referring to Fig. 1, speaker's voice X(w) from the speaker A1 is input as the voice signal YR(w) at the right-channel microphone 1R and as the voice signal YL(w) at the left-channel microphone 1L. The transmitting end transmits the right-channel voice signal YR(w) without modification.

The left-channel voice signal YL(w) is input together with the right-channel voice signal YR(w) to the estimator 20. The estimator 20 performs the following calculation to estimate the transfer function G(w):

G(w) = YL(w) / YR(w)

The resultant transfer function G(w) is transmitted.

The receiving end simply receives and reproduces the transmitted right-channel voice signal YR(w). The transfer function G(w) and the right-channel voice signal YR(w) are input to the generator 21, and the generator 21 performs the following operation:

YL(w) = G(w) · YR(w)

so that the left-channel voice signal is reproduced.

In this case, the transfer function G(w) is derived from equations (3) and (4) as:

G(w) = GL(w) / GR(w)   ---(5)

where GR(w) and GL(w) are the right- and left-channel transfer functions determined by the acoustic characteristics of the room and the speakers' positions. GR(w) and GL(w) are not influenced by the speaker's voice X(w).

The transfer function G(w) is stationary according to equation (5) if the speaker does not move to another location. The duration of most of these steady states is several hundred msec or longer.

The speaker's voice X(w) is not stationary, and therefore the left- and right-channel voice signals YL(w) and YR(w) are not stationary according to equations (1) and (2). If the transfer function G(w) is not very complicated, i.e., if the indoor reverberation time is not long, the number of data required for the transfer function G(w) is smaller than that for the voice signal YL(w). Therefore, the technique of this embodiment, which transmits the transfer function G(w), is advantageous over the conventional stereophonic voice transmission technique, which transmits the voice signal YL(w) itself.
Fig. 2 is a schematic view of a stereophonic voice transmission system according to a second embodiment of the present invention.

This system comprises left- and right-channel microphones 1L and 1R, loudspeakers 2L and 2R, an estimator 22 for estimating a transfer function G(w) or an impulse response H(k), a partial extractor 23 for extracting an approximated transfer function Ĝ(w) or an approximated impulse response Ĥ(k), and a mixer 24 for producing the left-channel voice signal YL(w) using the approximated transfer function Ĝ(w) or the approximated impulse response Ĥ(k). The impulse response of the transfer function G(w) at the kth sampling timing is given as follows:

H(k) = (h-m(k), h-m+1(k), ..., hn(k))T, for m > 0 and n > 0

where T denotes the transpose and h0(k) is the center tap.


In the same manner as in the first embodiment, the transmitting end sends the coded right-channel voice signal. The transfer function G(w) or the impulse response H(k) derived from the right- and left-channel voice signals is estimated by the estimator 22. As shown in Fig. 3(A), the impulse response H(k) has a waveform with a duration of between -1,000 and +2,000 samples. For illustrative convenience, assume that each speaker speaks for 500 msec on the average. In this case, assuming that each sampled value is quantized with 8 bits and that the quantized values are transmitted, the transmission rate required for transmitting this impulse response is as high as 8 x 3,000 x 2 = 48 kbps.

For this reason, as shown in Fig. 3(B), only part (e.g., the samples between -20 and +80) of the impulse response is extracted by the partial extractor 23 and is then transmitted. In this case, the transmission rate is 1.6 kbps, which is desirably lower than the 16 kbps of the ADPCM voice data.

The stereophonic effect is determined by the phases and delay times of the voice. As shown in Fig. 3(C), only the position and magnitude of the main tap, i.e., the tap having the maximum magnitude among all taps, may be extracted and transmitted. In this case, the feeling of presence in the conference is slightly degraded. If eight bits are assigned to the magnitude data and another eight bits are assigned to the position data, the bit rate becomes 32 bps, thereby greatly reducing the bit rate.
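The approximation schemes of Figs. 3(B) and 3(C) amount to keeping either a short window of the impulse response around the main tap or only the main tap itself. A minimal sketch follows (the window limits, the 8-bit quantization and the two updates per second are the illustrative figures quoted above; the function names are assumptions):

```python
import numpy as np

def extract_window(h, lo=-20, hi=80):
    """Fig. 3(B): keep only the samples between lo and hi around the main tap."""
    main = int(np.argmax(np.abs(h)))            # position of the main tap
    start, stop = max(main + lo, 0), min(main + hi, len(h))
    return main, h[start:stop]

def extract_main_tap(h):
    """Fig. 3(C): keep only the position and magnitude of the main tap."""
    main = int(np.argmax(np.abs(h)))
    return main, h[main]

# additional-data rate, assuming 8 bits per value and one update per 500 ms
updates_per_second = 2
window_rate = (20 + 80) * 8 * updates_per_second    # = 1600 bps (Fig. 3(B))
main_tap_rate = (8 + 8) * updates_per_second        # = 32 bps  (Fig. 3(C))
print(window_rate, main_tap_rate)
```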

At the receiving end, the transmitted right-channel voice signal is simply reproduced. The right-channel voice signal and the approximated transfer function Ĝ(w) or the approximated impulse response Ĥ(k) are mixed by the mixer 24 to reproduce the left-channel voice signal.

As described above, if the transfer function G(w) is simple, it is advantageous to send the transfer function G(w) in place of the left-channel voice signal YL(w).

However, if the transfer function G(w) is very complicated, it is less advantageous to send the transfer function G(w) in place of the left-channel voice signal YL(w). For this reason, in the second embodiment, the approximated transfer function Ĝ(w) or the approximated impulse response Ĥ(k) is sent in place of the transfer function G(w), thereby reducing the number of data to be sent.

Fig. 4 is a block diagram of a transmitting end (A) and a receiving end (B) for performing transmission using an approximated impulse response.

The transmitting end (A) comprises the microphones 1L and 1R, amplifiers 25L and 25R, A/D converters 26L and 26R, an adaptive filter 27, delay circuits 28L and 28R, a subtracter 29, an approximator 30, a threshold detector 31, level detectors 32a and 32b, a level ratio detector 33, an ADPCM circuit 34, and a transmitter circuit 35. The receiving end (B) comprises a separator 36, an ADPCM circuit 37, D/A converters 38L and 38R, amplifiers 39L and 39R, the loudspeakers 2L and 2R, a coefficient circuit 40, and a filter 41.

The left- and right-channel voice signals are input at the microphones 1L and 1R and amplified by the amplifiers 25L and 25R to predetermined levels. The amplified signals are sampled by the A/D converters 26L and 26R at the sampling frequency of 1 kHz, thereby obtaining digital signals XL(k) and XR(k) at the kth sampling time.


The right-channel signal XR(k) is input to the adaptive filter 27, and the left-channel signal XL(k) is delayed by the delay circuit 28L by a delay d corresponding to the center tap of the adaptive filter 27. The delayed signal is input as XL(k-d) to the subtracter 29. The subtracter 29 subtracts the output X̂L(k) of the adaptive filter from XL(k-d) to produce an error or difference signal e(k). The adaptive filter uses the power of the error signal as an evaluation function and controls the tap coefficients so as to minimize the error power according to a known scheme such as identification by learning.

When learning of the adaptive filter advances, the following result is obtained:

XL(k-d) ≈ X̂L(k)

Therefore, the left-channel voice signal can be derived using the right-channel voice signal.
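A minimal sketch of this adaptive-filter arrangement follows (the filter length, the center-tap delay, the step size and the toy signals are illustrative assumptions; the update rule is the normalized learning-identification form given later in equations (110) and (125)):

```python
import numpy as np

def adapt_step(h, x_buf, xl_delayed, alpha=0.5, eps=1e-8):
    """One update of the adaptive filter tap vector (identification by learning)."""
    xl_hat = h @ x_buf                      # estimate of the delayed left channel
    e = xl_delayed - xl_hat                 # error / difference signal e(k)
    h = h + alpha * e * x_buf / (x_buf @ x_buf + eps)
    return h, e

# toy simulation: left channel = attenuated, slightly delayed right channel
rng = np.random.default_rng(1)
N, d = 256, 100                             # taps and center-tap delay (illustrative)
xr = rng.standard_normal(5000)
xl = 0.8 * np.concatenate([np.zeros(3), xr[:-3]])   # 3-sample interchannel delay

h = np.zeros(N)
x_buf = np.zeros(N)
for k in range(len(xr)):
    x_buf = np.roll(x_buf, 1); x_buf[0] = xr[k]     # shift register of XR(k)
    xl_delayed = xl[k - d] if k >= d else 0.0       # XL(k-d) from the delay circuit
    h, e = adapt_step(h, x_buf, xl_delayed)
print(abs(e))   # the error shrinks as learning advances
```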

The approximator 30 compresses the tap coefficient data of the adaptive filter 27 to the required bit rate. Various approximation schemes may be used, as shown in Figs. 3(A) to 3(C). In this embodiment, the scheme in Fig. 3(B) is used. The approximator selects the main tap coefficient h100(k) among the adaptive filter tap coefficients h1(k) to h256(k) and quantizes the 23 tap coefficients from h89(k) to h111(k) into 8-bit codes at the bit rate of 1 kbps. The 8-bit main tap codes and an 8-bit header representing the start of the data string are formatted into the frame shown in Fig. 5, and the resultant frame data is sent as additional data to the transmitter circuit 35.

Tap coefficient approximation in the adaptive filter 27 is performed whenever the speaker is changed. The threshold detector 31 detects that the error signal e(k) exceeds a threshold value and then becomes lower than the threshold value, so that a change in speaker is detected.


This detection will be described in more detail with reference to Fig. 6.

Referring to Fig. 6, the speaker is changed from the speaker A to the speaker B after the lapse of 400 msec. In this case, the transfer function is changed from GA(w) to GB(w), and the impulse response is changed from HA(k) to HB(k). The adaptive filter 27 learns to follow the change in impulse response, and the tap coefficients are changed. Therefore, XL(k-d) ≠ X̂L(k) temporarily holds, and the level of the error signal e(k) is increased. Thereafter, the level of the error signal e(k) decreases below the threshold value, and the additional data is updated after the lapse of 600 msec as shown in Fig. 6.
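One way the threshold detector 31 could be realized is as a simple hysteresis test on the error power, flagging an update of the additional data once the error has risen above and then fallen back below the threshold (the threshold value and the per-frame powers below are illustrative):

```python
def speaker_change_detector(error_power, threshold):
    """Yield True when the error power rises above and then falls back below
    the threshold, i.e. when the adaptive filter has re-converged after a
    change in speaker."""
    above = False
    for p in error_power:
        if p > threshold:
            above = True
            yield False
        elif above:                # was above, now below: update additional data
            above = False
            yield True
        else:
            yield False

# e.g. smoothed |e(k)|^2 per frame
powers = [0.01, 0.02, 0.9, 1.2, 0.8, 0.05, 0.02]
print(list(speaker_change_detector(powers, threshold=0.5)))
# -> [False, False, False, False, False, True, False]
```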
In the transmitting end (A) of Fig. 4(A), the learning time of the adaptive filter 27 is set to be 200 msec or less. The additional data is thus updated about 200 msec after the actual change in speakers. For this reason, the main-data voice signal XR(k) is delayed by the delay circuit 28R by 200 msec so as to be synchronized with the additional data.

Since the additional data includes only some of the adaptive filter tap coefficients, direct transmission thereof lowers the level of the resultant left-channel voice signal XL(k) at the receiving end. In order to prevent this, the level detectors 32a and 32b and the level ratio detector 33 cooperate to detect the level ratio of XR(k) to XL(k), and the approximator 30 corrects the approximated tap coefficients, thereby optimizing the level of the resultant left-channel voice signal XL(k). Thereafter, the additional data is sent as a 1-kbps frame to the transmitter circuit 35.

The transmitter circuit 35 mixes the additional data with the main data, obtained by converting the right-channel voice signal from the delay circuit 28R into a 15-kbps ADPCM code in the ADPCM circuit 34, to produce 16-kbps stereophonic voice data. This voice data is transmitted to the receiving end (B) through a transmission line.

The stereophonic voice data sent to the receiving end (B) is separated by the separator 36 into the 15-kbps right-channel voice signal of the main data and the 1-kbps additional data. The right-channel voice signal is decoded by the ADPCM circuit 37, and the decoded signal is converted by the D/A converter 38R to an analog signal. The analog signal is then amplified by the amplifier 39R and output at the loudspeaker 2R.

On the other hand, the additional data is converted by the coefficient circuit 40 into 256 tap coefficients. These coefficients are supplied to the filter 41. The filter 41 uses the tap coefficients and the right-channel voice signal to produce a time-compressed left-channel voice signal. The resultant left-channel voice signal is converted by the D/A converter 38L, amplified by the amplifier 39L, and output at the loudspeaker 2L in the same manner as the right-channel voice signal. Thus, the transmitted time-compressed data is time-expanded by the receiving end.

The above embodiment can be realized by using current techniques, and the number of stereophonic signals that must be transmitted can be greatly reduced.

In this embodiment, the adaptive filter 27 and the filter 41 comprise transversal filters of time-region processing. However, these filters may be replaced with filters of frequency-region processing to achieve the same effect as described above.

In addition, a correlator may be used in place of the adaptive filter to detect a tap having a maximum correlation value.

Fig. 7 schematically shows a configuration of a stereophonic voice transmission system according to a third embodiment of the present invention. The stereophonic voice transmission system comprises left- and right-channel microphones 1L and 1R, loudspeakers 2L and 2R, an estimator 22 for estimating a transfer function G(w) or an impulse response H(k), a partial extractor 23 for extracting an approximated transfer function Ĝ(w) or an approximated impulse response Ĥ(k), tables 42a and 42b for prestoring reference approximated transfer functions Ĝ(w) and reference approximated impulse responses Ĥ(k), an encoder 43, a decryptor 44, and a generator 24 using the right-channel voice signal and the approximated transfer function Ĝ(w) or the approximated impulse response Ĥ(k) to produce the left-channel voice signal.
In this embodiment, at the transmitting end, the reference approximated transfer function Ĝ(w) or the reference approximated impulse response Ĥ(k) from the table 42a is compared by the encoder 43 with the approximated transfer function Ĝ(w) or the approximated impulse response Ĥ(k) extracted by the partial extractor 23. A code g representing the highest similarity between the prestored and the extracted data is transmitted. At the receiving end, the decryptor 44 receives the code g and the data read out from the table 42b to produce the approximated transfer function Ĝ(w) or the approximated impulse response Ĥ(k). The output from the decryptor 44 and the right-channel voice signal are mixed by the generator 24 to produce the left-channel voice signal. According to this embodiment, if the acoustic characteristics of the conference room are known, the number of transmission signals is reduced, while the feeling of presence in a conference is maintained.
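A minimal sketch of the encoder 43 / decryptor 44 pair follows, treating the tables 42a and 42b as a small codebook of reference approximated impulse responses (the codebook contents, the squared-error similarity measure and the function names are assumptions):

```python
import numpy as np

def encode_response(h_approx, table):
    """Encoder 43: return the code g of the stored response most similar to h_approx."""
    distances = [np.sum((h_approx - ref) ** 2) for ref in table]
    return int(np.argmin(distances))

def decrypt_response(g, table):
    """Decryptor 44: recover the reference response identified by code g."""
    return table[g]

# illustrative table of prestored approximated impulse responses
table = [np.array([1.0, 0.0, 0.0]),      # e.g. speaker near the right microphone
         np.array([0.0, 0.7, 0.0]),      # e.g. speaker near the centre
         np.array([0.0, 0.0, 0.5])]      # e.g. speaker near the left microphone

h_extracted = np.array([0.05, 0.65, 0.02])
g = encode_response(h_extracted, table)
print(g, decrypt_response(g, table))     # only the small code g is transmitted
```

Because both ends hold identical tables, only the index g has to cross the transmission line, which is why the scheme works best when the room's acoustic characteristics are known in advance.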

Fig. 8 schematically shows a stereophonic voice transmission system according to a fourth embodiment of the present invention.

A transmitting end in this stereophonic voice transmission system comprises left- and right-channel microphones 101R and 101L, a delay circuit 120 for delaying the left-channel microphone input voice signal, an estimator 121 for producing an estimated left-channel voice signal ŷ(k) from the right-channel voice signal x(k) in the time region, and a subtracter 122 for subtracting the estimated left-channel voice signal ŷ(k) from the left-channel voice signal y(k). A receiving end comprises left- and right-channel loudspeakers 102L and 102R, a generator 123 for producing an estimated left-channel voice signal ŷ(k) from the right-channel voice signal x(k), and an adder 124 for adding a difference signal to the left-channel voice signal ŷ(k) estimated by the generator 123. The transmitting and receiving ends are connected through transmission lines 125 and 126. It should be noted that x(k) and y(k) denote the values of the right- and left-channel voice signals at the kth sampling time.

Referring to Fig. 8, voice X(w) output by a speaker A1 is input to the microphones 101R and 101L, and the microphone input signals YR(w) and YL(w) are represented by transfer functions FR(w) and GL(w) determined by the propagation delays and the acoustic characteristics of the room. In this case, w is the angular frequency.

YR(w) = FR(w) · X(w)   ---(101)
YL(w) = GL(w) · X(w)   ---(102)

The left microphone input signal YL(w) is delayed with C(w) in the delay circuit 120 so as to guarantee the cause-and-effect relationship in the estimator 121, and the arrangement including the components up to the delay circuit 120 is represented by the transfer function FL(w):

YL(w) = C(w) · GL(w) · X(w) = FL(w) · X(w)   ---(103)

The delayed left-channel voice signal YL(w) is input to the subtracter 122.

The estimator 121 uses the left- and right-channel voice signals YR(w) and YL(w) to estimate the transfer function G(w) for deriving the left-channel voice signal YL(w) from the right-channel voice signal YR(w) as follows:

G(w) = FL(w) / FR(w)   ---(104)

so that the estimated transfer function Ĝ(w) can be obtained.
The estimator 121 mainly includes an adaptive transversal filter 121a for calculating the estimated left-channel voice signal ŷ(k) in the time region, as shown in Fig. 9(A), and a correction circuit 121b for sequentially updating the estimated impulse response Ĥ(k) of the transfer function G(w), as shown in Fig. 9(B). The adaptive transversal filter 121a and the correction circuit 121b are operated in synchronism with the sampling clocks.

The adaptive transversal filter 121a comprises N-tap shift registers 127, multipliers 128 for multiplying the components of the estimated impulse response Ĥ(k) by the corresponding components of the right-channel voice signal X(k), and an adder 129 for adding the outputs of the multipliers 128. The components of the right-channel voice signal are input to the shift registers, each having a delay time corresponding to one sampling time, so that the time-serial vector X(k) is produced as follows:

X(k) = (x(k), x(k-1), ..., x(k-N+1))T   ---(105)

where T denotes the transposed vector.
If the estimated impulse response obtained by approximating the estimated transfer function Ĝ(w) in the time region is given as follows:

Ĥ(k) = (h1(k), h2(k), h3(k), ..., hN(k))T   ---(106)

an estimated value ŷ(k) of the left-channel voice signal y(k) can be obtained as follows:

ŷ(k) = Ĥ(k)T · X(k)   ---(107)

In this case, if the impulse response series H of the transfer function G(w) is expressed as:

H = (h1, h2, ..., hN)T   ---(108)

and the transfer function can be effectively estimated, i.e.,

Ĥ(k) ≈ H   ---(109)

then the left-channel voice signal estimated value ŷ(k) is a good approximation of the actual left-channel voice signal y(k).
Estimation of the impulse response Ĥ(k) in the estimator 121 is performed by causing the correction circuit 121b to sequentially perform the following operation:

Ĥ(k+1) = Ĥ(k) + α · e(k) · X(k) / ||X(k)||²   ---(110)

for Ĥ(0) = 0. The above algorithm is a known identification-by-learning technique. In equation (110), e(k) is the output from the subtracter 122 and is given as follows:

e(k) = y(k) - ŷ(k)   ---(111)

and α is the coefficient that determines the convergence rate and stability.
As a result, only the difference signal e(k) is sent as the left-channel data after the above operations.
The receiving end has the generator 123, which has the same arrangement as that of the estimator 121. The generator 123 sequentially traces the estimation results of the left-channel voice signal at the transmitting end according to the right-channel voice signal X(k) and the difference signal e(k), and calculates the estimated left-channel voice signal ŷ'(k) (YL(w) on a frequency-region basis) from the following equations:

ŷ'(k) = Ĥ'(k)T · X(k)   ---(112)

Ĥ'(k+1) = Ĥ'(k) + α · e(k) · X(k) / ||X(k)||²   ---(113)

where Ĥ'(k) is the estimated tap coefficient series in the generator and Ĥ'(0) = 0.

Equations (112) and (113) in the receiving end are the same as equations (107) and (110) in the transmitting end, so that the estimated values ŷ(k) and ŷ'(k) of the left-channel voice signal at the transmitting and receiving ends satisfy:

ŷ'(k) = ŷ(k)   ---(114)

The left-channel output YL at the receiving end, therefore, is given as the sum of the estimated value ŷ'(k) and the difference signal e(k) produced by the adder 124:

YL(k) = ŷ'(k) + e(k) = ŷ(k) + e(k) = y(k)   ---(115)

As a result, the left-channel voice can be properly reproduced.
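A minimal sketch of equations (107) to (115) follows: because the receiving-end generator applies exactly the same update to exactly the same inputs (X(k) and e(k)) as the transmitting-end estimator, its estimate tracks the transmitter's estimate, and adding e(k) back recovers y(k) exactly (the filter length, the step size and the toy delay-and-attenuation left channel are illustrative assumptions):

```python
import numpy as np

def nlms_update(h, x_vec, e, alpha=0.5, eps=1e-8):
    """Equations (110)/(113): H(k+1) = H(k) + alpha * e(k) * X(k) / ||X(k)||^2."""
    return h + alpha * e * x_vec / (x_vec @ x_vec + eps)

rng = np.random.default_rng(2)
N = 8
x = rng.standard_normal(2000)                    # right-channel (main) signal
y = 0.6 * np.concatenate([np.zeros(2), x[:-2]])  # left channel: delayed, attenuated

h_tx = np.zeros(N)          # estimator 121 (transmitting end)
h_rx = np.zeros(N)          # generator 123 (receiving end)
x_vec = np.zeros(N)
max_err = 0.0
for k in range(len(x)):
    x_vec = np.roll(x_vec, 1); x_vec[0] = x[k]
    y_hat_tx = h_tx @ x_vec                      # equation (107)
    e = y[k] - y_hat_tx                          # equation (111): transmitted data
    h_tx = nlms_update(h_tx, x_vec, e)           # equation (110)

    y_hat_rx = h_rx @ x_vec                      # equation (112), from x and e only
    y_rec = y_hat_rx + e                         # equation (115)
    h_rx = nlms_update(h_rx, x_vec, e)           # equation (113)
    max_err = max(max_err, abs(y_rec - y[k]))
print(max_err)    # 0: the left channel is reproduced exactly
```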
The estimator 121 and the generator 123 in the stereophonic voice transmission system are adaptive transversal filters of the time region. However, these filters may be replaced with adaptive filters of the frequency region to obtain the same result as described above.

According to this scheme, the more accurate the approximation of the estimated values ŷ(k) and ŷ'(k) of the left-channel voice signal becomes, the less the power of e(k) becomes. The number of bits of e(k) can therefore be smaller than that of y(k).
A fifth embodiment of the present invention will be described below.
Fig. 10 schematically shows the configuration of a stereophonic voice transmission system of this embodiment.
In this embodiment, after a correlation component between the right- and left-channel voice signals is removed, the data is coded and decoded according to the ADPCM scheme. Transmission and storage of stereophonic voice can thus be performed using a small number of data. The same reference numerals as in Fig. 8 (the fourth embodiment) denote the same parts in Fig. 10.
Referring to Fig. 10, a transmitting end comprises a right-channel ADPCM encoder unit 130 and a left-channel ADPCM encoder unit 131. A receiving end comprises a right-channel ADPCM decoder unit 132 and a left-channel ADPCM decoder unit 133.
An interchannel correlation eliminator unit 134 has substantially the same function as that of the estimator 121 in Fig. 8, and an interchannel correlation adder unit 135 has substantially the same function as that of the adder 124 in Fig. 8.
Single-utterance voice S(w) (where w is the angular frequency) is input at the right- and left-channel microphones 1R and 1L with right- and left-channel transfer functions FR(w) and FL(w) determined by the acoustic characteristics of the room, and is converted to right- and left-channel voice signals X(w) and Y(w).
The interchannel correlation eliminator unit 134 in the transmitting end causes the ADPCM encoder units 130 and 131 (to be described in detail later) to produce the estimated transfer function G(w) according to an encoded right-channel voice signal X1(w) and a difference signal E1(w) as follows:

G(w) = Y(w) / X(w) = FL(w)·S(w) / (FR(w)·S(w)) = FL(w) / FR(w)   ---(116)

A predicted value Ŷ(w) is derived from:

Ŷ(w) = G(w) · X1(w)   ---(117)

and is subtracted by the subtracter 122 from the left-channel voice signal Y(w) to produce a predicted difference signal E(w). The signal E(w) is then input to the left-channel ADPCM encoder unit 131.
The encoder unit 131 sends the ADPCM-coded difference signal F(w) onto a transmission line 126. The encoder unit 131 has the same decoding function as the ADPCM decoder unit 133. A decoded difference signal E1(w), obtained by decoding the difference signal F(w) with the decoding function of the encoder unit 131, is input to the correlation eliminator unit 134. The right-channel ADPCM encoder unit 130 sends the ADPCM-encoded right-channel voice signal D(w) onto a transmission line 125. The right-channel ADPCM encoder unit 130 has the same decoding function as that of the ADPCM decoder unit 132. A decoded signal X1(w), obtained by decoding the right-channel voice signal D(w) with the decoding function of the encoder unit 130, is input to the correlation eliminator unit 134.
At the receiving end, D(w) and F(w) are ADPCM decoded by the ADPCM decoder units 132 and 133, and the interchannel correlation adder unit 135 performs the following calculations:

Ŷ(w) = G(w) · X1(w)   ---(118)

YL(w) = E1(w) + Ŷ(w)   ---(119)

and therefore the left-channel voice signal YL(w) is reproduced. In other words, the interchannel correlation adder unit 135 receives the decoded signals X1(w) and E1(w) and estimates Ŷ(w) to reproduce the left-channel voice signal.
Fig. 11 is a detailed circuit diagram of this embodiment.
A transmitter end 200 in the stereophonic voice transmission system comprises a right-channel ADPCM encoder unit 130 for ADPCM encoding the right-channel voice input at a right-channel microphone 101R, a left-channel ADPCM encoder unit 131 for ADPCM encoding the left-channel voice y(t) input at a left-channel microphone 101L, and an interchannel correlation eliminator unit 134 for predicting the left-channel voice signal from the right-channel voice signal and eliminating the interchannel correlation component from the left-channel voice signal.
The ADPCM signals D(k) and F(k) output from the transmitter end 200 are sent to a receiving end 300 through the transmission lines 125 and 126.
The receiving end 300 in the stereophonic voice transmission system comprises a right-channel ADPCM decoder unit 132 for decoding the right-channel ADPCM code D(k) and reproducing the right-channel voice signal X1(t), a left-channel ADPCM decoder unit 133 for decoding the left-channel ADPCM code F(k) and reproducing the left-channel voice signal, and an interchannel correlation adder unit 135 for predicting the left-channel voice signal from the right-channel voice signal and adding the interchannel correlation component to the ADPCM decoded voice signal.
The respective components are described in detail below.

Right-Channel ADPCM Encoder Unit 130:
The right-channel ADPCM encoder unit 130 comprises an A/D converter 201, a subtracter 203, an adaptive quantizer 205, an adaptive dequantizer 207, an adder 209, and an estimator 211.
The predicted right-channel voice signal x̂(k) produced by the estimator 211 is subtracted by the subtracter 203 from the digitized right-channel voice signal x(k) to obtain a predicted difference signal d(k), whose power is lower than that of x(k) owing to the prediction. The difference signal d(k) is encoded by the adaptive quantizer 205 into an ADPCM code having a bit rate of about 32 kbps, the quantization step being varied adaptively according to the amplitude of the input signal.

The adder 209 adds the reproduced predicted difference signal d1(k) decoded by the adaptive dequantizer 207 and the output x̂(k) from the estimator 211, so that the estimator 211 receives as its input the same right-channel reproduced voice signal X1(k) as in the right-channel ADPCM decoder unit 132 in the receiving end. Adaptive filtering is performed to minimize the power of the reproduced predicted difference signal d1(k).

Left-Channel ADPCM Encoder Unit 131:

The left-channel ADPCM encoder unit 131 comprises an A/D converter 213, a delay circuit 215, a subtracter 217, an adaptive quantizer 219, an adaptive dequantizer 221, an adder 223, and an estimator 227.


The interchannel-correlation-eliminated signal e(k), obtained by eliminating in the interchannel correlation eliminator unit 134 the correlation component from the left-channel voice signal delayed by the delay circuit 215, is ADPCM encoded in the same manner as the right-channel voice signal.

The above-mentioned delay operation guarantees the cause-and-effect relationship of the right- and left-channel voice signals x(k) and y(k) (even if voice reaches the left-channel microphone earlier than it reaches the right-channel microphone, the left-channel voice signal is delayed before being input to the interchannel correlation eliminator unit 134).

The self-correlated voice component is eliminated by the estimator 227, and the correlated component of the right-channel voice signal mixed into the left-channel voice signal is eliminated by the interchannel correlation eliminator unit 134. Therefore, the left-channel voice signal can be compressed to an ADPCM code of a lower bit rate (e.g., about 16 kbps) than that of the right-channel voice data.
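The benefit of the cascade can be illustrated numerically: once the interchannel correlation component ŷ(k) has been removed, the residual e(k) has far less power than y(k) itself, so it can be quantized with fewer ADPCM bits for the same quantization noise. A rough sketch under an assumed delay-and-attenuation model of the two channels:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(10000)                               # right channel
diffuse = 0.05 * rng.standard_normal(10000)                  # small uncorrelated part
y = 0.8 * np.concatenate([np.zeros(4), x[:-4]]) + diffuse    # left channel

# ideal interchannel predictor (what the eliminator unit 134 converges towards)
y_hat = 0.8 * np.concatenate([np.zeros(4), x[:-4]])
e = y - y_hat                                                # correlation-eliminated signal

print(np.mean(y ** 2), np.mean(e ** 2))   # residual power is roughly two orders lower
# each halving of the quantization step corresponds to about one extra bit, so a
# lower-power residual reaches the same quantization noise with fewer ADPCM bits
```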

Interchannel Correlation Eliminator Unit 134:
The interchannel correlation eliminator unit 134 comprises an estimator 229 and a subtracter 231. The estimator 229 receives the right-channel reproduced voice signal X1(k) and produces the interchannel correlation component ŷ(k). The estimator 229 performs adaptive filtering to optimize the filter characteristics so as to minimize the power of the left-channel predicted difference signal e1(k) in the receiving end 300.

Right-Channel ADPCM Decoder Unit 132:

The right-channel ADPCM decoder unit 132 comprises an adaptive dequantizer 233, an adder 235, an estimator 237, a delay circuit 239, and a D/A converter 241.

The received ADPCM code D(k) is converted into the right-channel reproduced predicted difference signal d1(k) by the adaptive dequantizer 233. The signal d1(k) is added by the adder 235 to the right-channel predicted signal x̂(k) output from the estimator 237, thereby producing the right-channel reproduced voice signal X1(k). Thereafter, in order to compensate for the left-channel delay introduced at the transmitting end 200, the same delay time is added to the right-channel voice signal by the delay circuit 239; the delayed signal is converted by the D/A converter 241 and output at the loudspeaker 102R.

The estimator 237 receives the right-channel reproduced voice signal X1(k) and performs adaptive filtering so as to minimize the predicted right-channel difference signal d1(k).

The estimator 237 is the same as the estimator 211 in the right-channel ADPCM encoder unit 130 and receives the same signal as therein. Therefore, the transmitting and receiving ends 200 and 300 output the same predicted signal x̂(k).

Left-Channel ADPCM Decoder Unit 133:
The left-channel ADPCM decoder unit 133 comprises an adaptive dequantizer 243, an adder 245, an estimator 247, and a D/A converter 249.

In the same manner as in the right-channel operation, the left-channel predicted difference signal e1(k) at the receiving end is produced from the received ADPCM code F(k). Thereafter, this signal is added by the interchannel correlation adder unit 135 to the correlation component ŷ(k) to obtain the left-channel reproduced voice signal Y1(k).
This signal is converted by the D/A converter 249 into an analog signal. The analog signal is output at the loudspeaker 102L.

Interchannel Correlation Adder Unit 135:
The interchannel correlation adder unit 135 comprises an estimator 251 and an adder 253. The adder unit 135 receives the right-channel reproduced voice signal X1(k) and causes the estimator 251 to produce the interchannel correlation component ŷ(k).

The estimator 251 has the same arrangement as that in the transmitting end 200. The estimator 251 comprises an adaptive filter whose learning minimizes the power of the receiving-end left-channel predicted difference signal e1(k) in the same manner as in the transmitting end 200, thereby obtaining the same predicted value ŷ(k) as in the transmitting end 200.

The estimator 229 (211, 227, 237, 247, or 251), the adaptive quantizer 205 (219), and the adaptive dequantizer 233 (207, 221, or 243) will be described in more detail.

Estimator 229:

Extensive studies have been made on the types and arrangements of the estimator 229. For example, the estimator can perform prediction in the time region or in the frequency region (e.g., by FFT, or Fast Fourier Transform). According to the present invention, any adaptive filter may be employed as the estimator 229. In this embodiment, however, an adaptive transversal filter of the time region as in Fig. 8 is used to constitute the interchannel estimator 229.

In the following description, XR(w) and e(k) are replaced by X1(w) and e1(k), respectively.
The estimator 229 mainly includes the adaptive transversal filter 121a for calculating the estimated left-channel voice signal ŷ(k) in the time region, as in Fig. 8, and the correction circuit 121b for sequentially correcting the estimated impulse response Ĥ(k) of the interchannel transfer function G(w). The adaptive transversal filter 121a and the correction circuit 121b are operated in synchronism with the sampling clocks.

The adaptive transversal filter 121a comprises N-tap shift registers 127, N multipliers 128 for multiplying the components of the estimated impulse response Ĥ(k) by the corresponding components of the right-channel voice signal X1(k), and an adder 129 for adding the outputs from the multipliers 128.


In the estimator 229, the respective components of the right-channel voice signal X1 are input to the shift registers 127, each having a one-sampling delay time, so that the time-serial vector is produced as follows:

X1(k) = (X1(k), X1(k-1), ..., X1(k-N+1))T   ---(120)

where T denotes the transposed vector.
On the other hand, if the estimated impulse response obtained by approximating the estimated transfer function Ĝ(w) in the time region is defined as follows:

Ĥ(k) = (h1(k), h2(k), h3(k), ..., hN(k))T   ---(121)

the estimated value ŷ(k) of the left-channel voice signal y(k) is given as follows:

ŷ(k) = Ĥ(k)T · X1(k)   ---(122)

In this case, if the impulse response series of the transfer function G(w) is

H = (h1, h2, ..., hN)T   ---(123)

and the transfer function can be effectively estimated as

Ĥ(k) ≈ H   ---(124)

then the left-channel voice signal estimated value ŷ(k) is a good approximation of the actual left-channel voice signal y(k).

Estimation of the impulse response Ĥ(k) in the estimator 229 is performed by the correction circuit 121b according to the following calculation, which sequentially minimizes the power of e1(k):

Ĥ(k+1) = Ĥ(k) + α · e1(k) · X1(k) / ||X1(k)||²   ---(125)

for Ĥ(0) = 0. This algorithm is known as the learning identification method. In equation (125), e1(k) is the signal reproduced at the receiving end from the output of the subtracter in Fig. 11 (equation (126)):

e(k) = Y(k) - Ŷ(k)   ---(126)

and α is the coefficient that determines the convergence rate and stability in equation (125).

Adaptive Quantizer 205 and Adaptive Dequantizer 233:

Fig. 12 shows the configuration of the adaptive quantizer 205 and the adaptive dequantizer 233.

The adaptive quantizer 205 comprises a divider 255, an encoder 257, a decoder 259, a multiplier 261, and a power detector 263. The adaptive dequantizer 233 comprises a decoder 265, a multiplier 267, and a power detector 269.

For example, a 14-bit linear predicted difference signal d(k) in the transmitting end is divided by a quantization step Δ(k) and quantized. The quantized signal is encoded by the encoder 257 into an ADPCM code D(k), which is then sent onto the transmission line 125.

The signal decoded by the decoder 259 is multiplied by the multiplier 261 by the quantization step Δ(k) to produce a dequantized signal d1(k). The power detector 263 detects the power of the signal d1(k). From this power, the quantization step Δ(k) is determined.

On the other hand, in the adaptive dequantizer 233, the ADPCM code is decoded by the decoder 265, and the decoded signal is multiplied by the multiplier 267 by the quantization step Δ(k), thereby producing the 14-bit receiving-end linear predicted difference signal. The quantization step Δ(k) is determined by detecting the power of d1(k) in the power detector 269 in the same manner as described above.

The above operations can be performed in the logarithmic region.
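A minimal sketch of the quantizer/dequantizer pair of Fig. 12 follows (the 4-bit code size, the step-adaptation rule and the smoothing constant are assumptions; the patent only specifies that the step is derived from the power of the dequantized signal, so that both ends can track the same step without side information):

```python
import numpy as np

def adaptive_quantize(d, state, levels=16):
    """Divider 255 + encoder 257: quantize d(k) with the current step."""
    step, _power = state
    return int(np.clip(np.round(d / step), -levels // 2, levels // 2 - 1))

def adaptive_dequantize(code, state, smooth=0.9):
    """Decoder 265 + multiplier 267 + power detector 269: rebuild d1(k) and
    adapt the step from the power of the dequantized signal (illustrative rule)."""
    step, power = state
    d1 = code * step
    power = smooth * power + (1 - smooth) * d1 * d1
    step = max(np.sqrt(power) / 4, 1e-3)
    return d1, (step, power)

state_tx = state_rx = (0.1, 0.0)
rng = np.random.default_rng(4)
for d in 0.5 * rng.standard_normal(200):
    code = adaptive_quantize(d, state_tx)
    # both ends update their state from the same dequantized value d1(k),
    # so the quantization steps stay in lockstep without side information
    d1_tx, state_tx = adaptive_dequantize(code, state_tx)
    d1_rx, state_rx = adaptive_dequantize(code, state_rx)
    assert state_tx == state_rx
```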

According to the fifth embodiment, the main voice signal and the difference signal are ADPCM encoded, and the encoded signals are transmitted for stereophonic transmission. As compared with conventional stereophonic voice transmission, stereophonic transmission can thus be achieved with a smaller number of signals.

It is also possible to convert Adaptive Predictive Coding (APC) signals into stereophonic signals in the same manner as described above. In this case, the correlation coefficients, which are eliminated in the ADPCM scheme, are also sent to the receiving end.

In the fourth and fifth embodiments, the estimation algorithm is exemplified by identification by learning. However, a steepest descent method may be used in place of identification by learning.
In the above embodiments, two-channel stereophonic voice transmission is exemplified. However, the present invention is not limited to such transmission, but may be extended to stereophonic voice transmission of three or more channels and is also applicable to voice storage as well as voice transmission.

Claims (19)

THE EMBODIMENTS OF THE INVENTION IN WHICH AN EXCLUSIVE
PROPERTY OR PRIVILEGE IS CLAIMED ARE DEFINED AS FOLLOWS:
1. A stereophonic voice transmission system for transmitting a plurality of sound signals over a plurality of channels through a transmission line, said plurality of sound signals producing a main voice signal from a right channel and a sub-voice signal from a left channel, comprising (a) a transmitting end including, estimating means for estimating additional data required for generating said sub-voice signal based on a predetermined formula using said main voice signal and said sub-voice signal as inputs, and transmitting means for coding and transmitting said main voice signal and said additional data; and (b) a receiving end including, receiving means for receiving and decoding said coded main voice signal and said coded additional data, and generating means for generating said main voice signal from said decoded main voice signal and for generating said sub-voice signal from said decoded additional data and said decoded main voice signal.
2. A system according to claim 1, wherein said additional data is a transfer function, an impulse response, or an approximated transfer function or an approximated impulse response.
3. A system according to claim 1 or 2, wherein said estimating means estimates a transfer function between a sound source and a respective one of a plurality of microphones, each of said microphones receiving as inputs a plurality of sound sources voice signals, and wherein said generating means generates a plurality of sound signals from said transfer function and said main voice signal.
4. A system according to claim 2, wherein said estimating means comprises an estimator, said estimator estimating said transfer function or said impulse response between a sound source and a respective one of a plurality of microphones, each of said microphones receiving a plurality of sound sources, and said generating means includes means for generating a plurality of sound signals from said approximated transfer function or said approximated impulse response and at least one sound signal input to said microphones.
5. A system according to claim 1, wherein said additional data is identification data for identifying a transfer function, an impulse response, or a plurality of collected and stored data of approximated data of said transfer function or said impulse response.
6. A system according to claim 1, wherein said estimating means comprises estimating circuit means for estimating a transfer function or an impulse response between a sound source and a respective one of a plurality of microphones, each of said microphones receiving as inputs a plurality of sound sources, partial extractor means for extracting an approximated transfer function or an approximated impulse response for said transfer function or said impulse response, table means for prestoring a collected approximated transfer function or a collected impulse response, and encoder means for comparing said approximated transfer function or said approximated impulse response which is extracted by said partial extractor with said collected approximated transfer function or said collected approximated impulse response which is read out from said table means, and for encoding as a result a comparison therebetween; and wherein said generating means comprises, decryptor means for decrypting said encoded comparison into a corresponding approximated transfer function or a corresponding approximated impulse response, and means for generating a plurality of sound signals from a decrypted approximated transfer function or a decrypted approximated impulse response, and from at least one sound source received by one of said microphones.
7. A stereophonic voice transmission system for transmitting a plurality of sound signals through a transmission line, said plurality of sound signals producing a main voice signal from a right channel and a sub-voice signal from a left channel, comprising (a) a transmitting end including, estimating means for estimating said sub-voice signal from said main voice signal, and for evaluating a difference signal between an estimated voice signal and said sub-voice signal, or for evaluating a compressed difference signal obtained by causing a compressing means to compress said difference signal, said estimating means controlling an estimation parameter, and transmitting means for transmitting said main voice signal, and said difference signal or said compressed difference signal to a receiving end of another system; and (b) a receiving end including, receiving means for receiving and decoding said main voice signal, and said difference signal or said compressed difference signal, estimating means, equivalent to that in said transmitting end, for evaluating a decoded difference signal or a decoded expanded signal obtained by causing an expanding means to expand said compressed difference signal from said transmitting end, said estimating means producing an estimated signal, and generating means for generating said sub-voice signal by adding said difference signal or said expanded difference signal to said estimated signal produced by said estimating means.
8. A system according to claim 7, wherein said transmitting end comprises storage means for storing said main voice signal and said difference signal or said compressed difference signal therein.
9. A system according to claim 7, wherein said compressing means comprises an adaptive estimator or an adaptive quantizer.
10. A system according to claim 7, wherein said estimating means comprises a delay circuit means for delaying at least one sound signal of said plurality of sound signals, said estimating means estimating other ones of said plurality of sound signals in a time region according to said at least one sound signal, and a subtractor means for subtracting each of said estimated other sound signals from said at least one sound signal to obtain a difference signal; wherein said receiving means comprises a generator means for generating said other sound signals in the time region from said at least one sound signal, and an adder means for adding each of said other sound signals produced by said generator to said difference signal.
11. A system according to claim 1 or 7, wherein said system comprises a teleconferencing system.
12. A stereophonic voice transmission system for transmitting a plurality of sound signals through a transmission line, said plurality of sound signals producing a main voice signal from a right channel and a sub-voice signal from a left channel, comprising (a) a transmitting end including, estimating means for estimating said sub-voice signal from said main voice signal, compressing means for obtaining a compressed difference signal from a difference signal representative of a difference between an estimated sub-voice signal and said sub-voice signal, means for evaluating said compressed difference signal and for controlling an estimation parameter, and transmitting means for coding and transmitting said main voice signal, and said difference signal or said compressed difference signal, (b) a receiving end including, receiving means for receiving and decoding said main voice signal, and said difference signal or said compressed difference signal, estimating means, equivalent to that in said transmitting end, for evaluating said decoded difference signal or said compressed difference signal and for producing an estimated sub-voice signal, and generating means for generating said sub-voice signal by adding said difference signal or said compressed difference signal to said estimated sub-voice signal.
13. The system according to claim 12, wherein said transmitting end comprises storage means for storing said main voice signal, and said difference signal, or said compressed difference signal.
14. The system according to claim 12, wherein said compressing means comprises an adaptive estimator.
15. The system according to claim 12, wherein said compressing means comprises an adaptive quantizer.
16. The system according to claim 12, wherein said plurality of sound signals are collected by a plurality of microphones.
17. The system according to claim 16, wherein said estimating means comprises delay circuit means for delaying at least one sound signal collected by one of said microphones, wherein said estimating means comprises circuit means for estimating other remaining sound signals input to said microphones over a predetermined time period, and subtracter means for subtracting said estimated remaining sound signals from said at least one sound signal to obtain a difference signal, and wherein said generating means comprises circuit means for generating said remaining sound signals in the predetermined time period from at least one sound signal input to one of said microphones, and adder means for adding said remaining generated sound signals to said difference signal.
18. A stereophonic voice transmission system for transmitting a plurality of sound signals, said plurality of sound signals producing a main voice signal from a right channel and a sub-voice signal from a left channel, through a transmission line, comprising estimating means for estimating additional data required for generating said sub-voice signal based on a predetermined formula using said main voice signal and said sub-voice signal as inputs; and transmitting means for coding and transmitting said main voice signal and said additional data.
19. A stereophonic voice reception system for receiving a plurality of sound signals through a transmission line comprising receiving means for receiving and decoding a coded main voice signal from a right channel and coded additional data representative of a sub-voice signal from a left channel; and generating means for generating said main voice signal from said decoded main voice signal and for generating said sub-voice signal from said decoded additional data and said decoded main voice signal.
CA000517216A 1985-08-30 1986-08-29 Stereophonic voice signal transmission system Expired CA1268546A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP60191746A JPH0758938B2 (en) 1985-08-30 1985-08-30 Stereo signal transmission encoding / decoding method, encoding device and decoding device
JP60-191746 1985-08-30
JP8284086A JPH0761043B2 (en) 1986-04-10 1986-04-10 Stereo audio transmission storage method
JP61-082840 1986-04-10

Publications (2)

Publication Number Publication Date
CA1268546C CA1268546C (en) 1990-05-01
CA1268546A true CA1268546A (en) 1990-05-01

Family

ID=26423862

Family Applications (1)

Application Number Title Priority Date Filing Date
CA000517216A Expired CA1268546A (en) 1985-08-30 1986-08-29 Stereophonic voice signal transmission system

Country Status (2)

Country Link
US (1) US4815132A (en)
CA (1) CA1268546A (en)

Families Citing this family (107)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5075880A (en) * 1988-11-08 1991-12-24 Wadia Digital Corporation Method and apparatus for time domain interpolation of digital audio signals
US4922536A (en) * 1988-11-14 1990-05-01 Massachusetts Institute Of Technology Digital audio transmission for use in studio, stage or field applications
US5228076A (en) * 1989-06-12 1993-07-13 Emil Hopner High fidelity speech encoding for telecommunications systems
US5020098A (en) * 1989-11-03 1991-05-28 At&T Bell Laboratories Telephone conferencing arrangement
US5278909A (en) * 1992-06-08 1994-01-11 International Business Machines Corporation System and method for stereo digital audio compression with co-channel steering
US5335011A (en) * 1993-01-12 1994-08-02 Bell Communications Research, Inc. Sound localization system for teleconferencing using self-steering microphone arrays
US5555310A (en) * 1993-02-12 1996-09-10 Kabushiki Kaisha Toshiba Stereo voice transmission apparatus, stereo signal coding/decoding apparatus, echo canceler, and voice input/output apparatus to which this echo canceler is applied
DE4320990B4 (en) * 1993-06-05 2004-04-29 Robert Bosch Gmbh Redundancy reduction procedure
US5664021A (en) * 1993-10-05 1997-09-02 Picturetel Corporation Microphone system for teleconferencing system
US5563951A (en) * 1994-07-25 1996-10-08 Interval Research Corporation Audio interface garment and communication system for use therewith
DE19526366A1 (en) * 1995-07-20 1997-01-23 Bosch Gmbh Robert Redundancy reduction method for coding multichannel signals and device for decoding redundancy-reduced multichannel signals
GB2303516A (en) * 1995-07-20 1997-02-19 Plessey Telecomm Teleconferencing
US7012630B2 (en) * 1996-02-08 2006-03-14 Verizon Services Corp. Spatial sound conference system and apparatus
DE19628293C1 (en) * 1996-07-12 1997-12-11 Fraunhofer Ges Forschung Encoding and decoding audio signals using intensity stereo and prediction
US6252965B1 (en) * 1996-09-19 2001-06-26 Terry D. Beard Multichannel spectral mapping audio apparatus and method
US6529602B1 (en) * 1997-08-19 2003-03-04 Walker Digital, Llc Method and apparatus for the secure storage of audio signals
US6125115A (en) * 1998-02-12 2000-09-26 Qsound Labs, Inc. Teleconferencing method and apparatus with three-dimensional sound positioning
US6173059B1 (en) 1998-04-24 2001-01-09 Gentner Communications Corporation Teleconferencing system with visual feedback
US6961432B1 (en) * 1999-04-29 2005-11-01 Agere Systems Inc. Multidescriptive coding technique for multistream communication of signals
US6533617B1 (en) 2000-01-07 2003-03-18 J. D'addario & Company, Inc. Electrical plug connectors
US6296525B1 (en) 2000-01-07 2001-10-02 J. D'addario & Company, Inc. Electrical plug and jack connectors
US6973184B1 (en) * 2000-07-11 2005-12-06 Cisco Technology, Inc. System and method for stereo conferencing over low-bandwidth links
US7644003B2 (en) * 2001-05-04 2010-01-05 Agere Systems Inc. Cue-based audio coding/decoding
US7292901B2 (en) * 2002-06-24 2007-11-06 Agere Systems Inc. Hybrid multi-channel/cue coding/decoding of audio signals
US7583805B2 (en) * 2004-02-12 2009-09-01 Agere Systems Inc. Late reverberation-based synthesis of auditory scenes
US20030035553A1 (en) * 2001-08-10 2003-02-20 Frank Baumgarte Backwards-compatible perceptual coding of spatial cues
US7116787B2 (en) * 2001-05-04 2006-10-03 Agere Systems Inc. Perceptual synthesis of auditory scenes
US7206740B2 (en) * 2002-01-04 2007-04-17 Broadcom Corporation Efficient excitation quantization in noise feedback coding with general noise shaping
US6751587B2 (en) * 2002-01-04 2004-06-15 Broadcom Corporation Efficient excitation quantization in noise feedback coding with general noise shaping
CN100508026C (en) * 2002-04-10 2009-07-01 皇家飞利浦电子股份有限公司 Coding of stereo signals
DE60331535D1 (en) * 2002-04-10 2010-04-15 Koninkl Philips Electronics Nv Coding and decoding for multi-channel signals
US7805313B2 (en) * 2004-03-04 2010-09-28 Agere Systems Inc. Frequency-based coding of channels in parametric multi-channel coding systems
EP1757165B1 (en) * 2004-06-08 2011-12-28 Koninklijke Philips Electronics N.V. Coding reverberant sound signals
GB2416955B (en) * 2004-07-28 2009-03-18 Vodafone Plc Conference calls in mobile networks
JP2008510197A (en) 2004-08-17 2008-04-03 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Scalable audio coding
CN101031960A (en) * 2004-09-30 2007-09-05 松下电器产业株式会社 Scalable encoding device, scalable decoding device, and method thereof
US8204261B2 (en) * 2004-10-20 2012-06-19 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Diffuse sound shaping for BCC schemes and the like
US7720230B2 (en) * 2004-10-20 2010-05-18 Agere Systems, Inc. Individual channel shaping for BCC schemes and the like
KR101236259B1 (en) 2004-11-30 2013-02-22 에이저 시스템즈 엘엘시 A method and apparatus for encoding audio channel s
US8340306B2 (en) * 2004-11-30 2012-12-25 Agere Systems Llc Parametric coding of spatial audio with object-based side information
US7787631B2 (en) * 2004-11-30 2010-08-31 Agere Systems Inc. Parametric coding of spatial audio with cues based on transmitted channels
US7903824B2 (en) 2005-01-10 2011-03-08 Agere Systems Inc. Compact side information for parametric coding of spatial audio
US7813481B1 (en) 2005-02-18 2010-10-12 At&T Mobility Ii Llc Conversation recording with real-time notification for users of communication terminals
US7852999B2 (en) * 2005-04-27 2010-12-14 Cisco Technology, Inc. Classifying signals at a conference bridge
CN101496387B (en) * 2006-03-06 2012-09-05 思科技术公司 System and method for access authentication in a mobile wireless network
US8570373B2 (en) * 2007-06-08 2013-10-29 Cisco Technology, Inc. Tracking an object utilizing location information associated with a wireless device
US8355041B2 (en) * 2008-02-14 2013-01-15 Cisco Technology, Inc. Telepresence system for 360 degree video conferencing
US8797377B2 (en) * 2008-02-14 2014-08-05 Cisco Technology, Inc. Method and system for videoconference configuration
US8319819B2 (en) * 2008-03-26 2012-11-27 Cisco Technology, Inc. Virtual round-table videoconference
US8390667B2 (en) * 2008-04-15 2013-03-05 Cisco Technology, Inc. Pop-up PIP for people not in picture
US8694658B2 (en) * 2008-09-19 2014-04-08 Cisco Technology, Inc. System and method for enabling communication sessions in a network environment
US8588947B2 (en) * 2008-10-13 2013-11-19 Lg Electronics Inc. Apparatus for processing an audio signal and method thereof
US8477175B2 (en) 2009-03-09 2013-07-02 Cisco Technology, Inc. System and method for providing three dimensional imaging in a network environment
US8659637B2 (en) * 2009-03-09 2014-02-25 Cisco Technology, Inc. System and method for providing three dimensional video conferencing in a network environment
US20100283829A1 (en) * 2009-05-11 2010-11-11 Cisco Technology, Inc. System and method for translating communications between participants in a conferencing environment
WO2010134332A1 (en) * 2009-05-20 2010-11-25 Panasonic Corporation Encoding device, decoding device, and methods therefor
US8659639B2 (en) 2009-05-29 2014-02-25 Cisco Technology, Inc. System and method for extending communications between participants in a conferencing environment
US9082297B2 (en) * 2009-08-11 2015-07-14 Cisco Technology, Inc. System and method for verifying parameters in an audiovisual environment
US9225916B2 (en) * 2010-03-18 2015-12-29 Cisco Technology, Inc. System and method for enhancing video images in a conferencing environment
USD626102S1 (en) 2010-03-21 2010-10-26 Cisco Tech Inc Video unit with integrated features
USD626103S1 (en) 2010-03-21 2010-10-26 Cisco Technology, Inc. Video unit with integrated features
US9313452B2 (en) 2010-05-17 2016-04-12 Cisco Technology, Inc. System and method for providing retracting optics in a video conferencing environment
US8896655B2 (en) 2010-08-31 2014-11-25 Cisco Technology, Inc. System and method for providing depth adaptive video conferencing
US8599934B2 (en) 2010-09-08 2013-12-03 Cisco Technology, Inc. System and method for skip coding during video conferencing in a network environment
US8599865B2 (en) 2010-10-26 2013-12-03 Cisco Technology, Inc. System and method for provisioning flows in a mobile network environment
US8699457B2 (en) 2010-11-03 2014-04-15 Cisco Technology, Inc. System and method for managing flows in a mobile network environment
US8730297B2 (en) 2010-11-15 2014-05-20 Cisco Technology, Inc. System and method for providing camera functions in a video environment
US9338394B2 (en) 2010-11-15 2016-05-10 Cisco Technology, Inc. System and method for providing enhanced audio in a video environment
US9143725B2 (en) 2010-11-15 2015-09-22 Cisco Technology, Inc. System and method for providing enhanced graphics in a video environment
US8902244B2 (en) 2010-11-15 2014-12-02 Cisco Technology, Inc. System and method for providing enhanced graphics in a video environment
US8542264B2 (en) 2010-11-18 2013-09-24 Cisco Technology, Inc. System and method for managing optics in a video environment
US8723914B2 (en) 2010-11-19 2014-05-13 Cisco Technology, Inc. System and method for providing enhanced video processing in a network environment
US9111138B2 (en) 2010-11-30 2015-08-18 Cisco Technology, Inc. System and method for gesture interface control
USD678894S1 (en) 2010-12-16 2013-03-26 Cisco Technology, Inc. Display screen with graphical user interface
USD682293S1 (en) 2010-12-16 2013-05-14 Cisco Technology, Inc. Display screen with graphical user interface
USD678320S1 (en) 2010-12-16 2013-03-19 Cisco Technology, Inc. Display screen with graphical user interface
USD682854S1 (en) 2010-12-16 2013-05-21 Cisco Technology, Inc. Display screen for graphical user interface
USD682294S1 (en) 2010-12-16 2013-05-14 Cisco Technology, Inc. Display screen with graphical user interface
USD678307S1 (en) 2010-12-16 2013-03-19 Cisco Technology, Inc. Display screen with graphical user interface
USD678308S1 (en) 2010-12-16 2013-03-19 Cisco Technology, Inc. Display screen with graphical user interface
USD682864S1 (en) 2010-12-16 2013-05-21 Cisco Technology, Inc. Display screen with graphical user interface
US8692862B2 (en) 2011-02-28 2014-04-08 Cisco Technology, Inc. System and method for selection of video data in a video conference environment
US8670019B2 (en) 2011-04-28 2014-03-11 Cisco Technology, Inc. System and method for providing enhanced eye gaze in a video conferencing environment
US8786631B1 (en) 2011-04-30 2014-07-22 Cisco Technology, Inc. System and method for transferring transparency information in a video environment
US8934026B2 (en) 2011-05-12 2015-01-13 Cisco Technology, Inc. System and method for video coding in a dynamic environment
US9025779B2 (en) 2011-08-08 2015-05-05 Cisco Technology, Inc. System and method for using endpoints to provide sound monitoring
US8947493B2 (en) 2011-11-16 2015-02-03 Cisco Technology, Inc. System and method for alerting a participant in a video conference
US8682087B2 (en) 2011-12-19 2014-03-25 Cisco Technology, Inc. System and method for depth-guided image filtering in a video conference environment
US9681154B2 (en) 2012-12-06 2017-06-13 Patent Capital Group System and method for depth-guided filtering in a video conference environment
US9843621B2 (en) 2013-05-17 2017-12-12 Cisco Technology, Inc. Calendaring activities based on communication processing
US9565493B2 (en) 2015-04-30 2017-02-07 Shure Acquisition Holdings, Inc. Array microphone system and method of assembling the same
US9554207B2 (en) 2015-04-30 2017-01-24 Shure Acquisition Holdings, Inc. Offset cartridge microphones
US10367948B2 (en) 2017-01-13 2019-07-30 Shure Acquisition Holdings, Inc. Post-mixing acoustic echo cancellation systems and methods
US10043523B1 (en) * 2017-06-16 2018-08-07 Cypress Semiconductor Corporation Advanced packet-based sample audio concealment
WO2019231632A1 (en) 2018-06-01 2019-12-05 Shure Acquisition Holdings, Inc. Pattern-forming microphone array
US11297423B2 (en) 2018-06-15 2022-04-05 Shure Acquisition Holdings, Inc. Endfire linear array microphone
EP3854108A1 (en) 2018-09-20 2021-07-28 Shure Acquisition Holdings, Inc. Adjustable lobe shape for array microphones
US11558693B2 (en) 2019-03-21 2023-01-17 Shure Acquisition Holdings, Inc. Auto focus, auto focus within regions, and auto placement of beamformed microphone lobes with inhibition and voice activity detection functionality
CN113841421A (en) 2019-03-21 2021-12-24 Shure Acquisition Holdings, Inc. Auto-focus, in-region auto-focus, and auto-configuration of beamforming microphone lobes with suppression
CN113841419A (en) 2019-03-21 2021-12-24 Shure Acquisition Holdings, Inc. Housing and associated design features for ceiling array microphone
WO2020237206A1 (en) 2019-05-23 2020-11-26 Shure Acquisition Holdings, Inc. Steerable speaker array, system, and method for the same
WO2020243471A1 (en) 2019-05-31 2020-12-03 Shure Acquisition Holdings, Inc. Low latency automixer integrated with voice and noise activity detection
JP2022545113A (en) 2019-08-23 2022-10-25 シュアー アクイジッション ホールディングス インコーポレイテッド One-dimensional array microphone with improved directivity
US11552611B2 (en) 2020-02-07 2023-01-10 Shure Acquisition Holdings, Inc. System and method for automatic adjustment of reference gain
USD944776S1 (en) 2020-05-05 2022-03-01 Shure Acquisition Holdings, Inc. Audio device
WO2021243368A2 (en) 2020-05-29 2021-12-02 Shure Acquisition Holdings, Inc. Transducer steering and configuration systems and methods using a local positioning system
JP2024505068A (en) 2021-01-28 2024-02-02 シュアー アクイジッション ホールディングス インコーポレイテッド Hybrid audio beamforming system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB1369813A (en) * 1971-02-02 1974-10-09 Nat Res Dev Reproduction of sound
US4095049A (en) * 1976-03-15 1978-06-13 National Research Development Corporation Non-rotationally-symmetric surround-sound encoding system
US4205201A (en) * 1978-01-18 1980-05-27 Kahn Leonard R Method and means for reducing intelligible crosstalk in telephone systems
JPS57176856A (en) * 1981-04-23 1982-10-30 Matsushita Electric Ind Co Ltd Transmitting system of time division digital signal
EP0082905B1 (en) * 1981-12-24 1986-04-09 Deutsche ITT Industries GmbH Circuit for the processing, transmission and acoustic reproduction of digitalized tone frequency signals

Also Published As

Publication number Publication date
CA1268546C (en) 1990-05-01
US4815132A (en) 1989-03-21

Similar Documents

Publication Publication Date Title
CA1268546A (en) Stereophonic voice signal transmission system
US5570372A (en) Multimedia communications with system-dependent adaptive delays
CA2115610C (en) Stereo voice transmission apparatus, echo canceler, and voice input/output apparatus to which this echo canceler is applied
EP0475196B1 (en) Method for processing audio signals in a sub-band coding system
JP2703405B2 (en) Polyphonic coding
CN1249869A (en) Method and apparatus for using state determination to control functional elements
JP3400064B2 (en) Speech codec, speech coder, and speech decoder
US6078882A (en) Method and apparatus for extracting speech spurts from voice and reproducing voice from extracted speech spurts
JPH10340097A (en) Comfortable noise generator, voice encoder including its component and decoder
JPH0761043B2 (en) Stereo audio transmission storage method
JP3207281B2 (en) Stereo speech encoding / decoding system, stereo speech decoding device, and single speech / multiple simultaneous speech discrimination device
JPH0758939B2 (en) Stereo signal transmission method, encoding device and decoding device
JPS6251844A (en) Stereo sound transmission system
JP3441112B2 (en) Multipoint communication controller
Minami et al. Stereophonic adpcm voice coding method
JPH05347594A (en) Transmission code error compensating device
JPH04324727A (en) Stereo coding transmission system
JP2774518B2 (en) Stereo audio transmission system
JPH06252863A (en) Stereo audio signal transmitter
JPH1013556A (en) Video conference system
JP3222226B2 (en) Decryption device
JPH07123028A (en) Voice conference equipment
JPH0771276B2 (en) Electronic conference system
JP2001249691A (en) Voice encoding device and voice decoding device
JPS6384213A (en) Streo coding and decoding device

Legal Events

Date Code Title Description
MKEX Expiry

Effective date: 20070501