Publication number: US 6304648 B1
Publication type: Grant
Application number: US 09/217,503
Publication date: Oct 16, 2001
Filing date: Dec 21, 1998
Priority date: Dec 21, 1998
Fee status: Paid
Publication numbers: 09217503, 217503, US 6304648 B1, US 6304648B1, US-B1-6304648, US6304648 B1, US6304648B1
Inventors: Young-fu Chang
Original assignee: Lucent Technologies Inc.
External links: USPTO, USPTO Assignment, Espacenet
Multimedia conference call participant identification system and method
US 6304648 B1
Abstract
A telecommunication system for establishing a conference call between various participants to the call communicating via multimedia communication devices. A call participant identification system detects the presence of voice signals made from a speaking call participant during the conference call. Call participant identification information is associated with the voice signals generated by the speaking call participant. A visual indication identifying the call participant is established at other multimedia communication devices connected with the call to identify the speaking call participant to the other participants of the conference call upon the detection of the voice signals from the speaking call participant.
Images (5)
Claims (16)
What is claimed is:
1. A method for identifying a call participant to a conference call having a plurality of call participants communicating via a telecommunication system, comprising the steps of:
detecting a presence of voice signals during the conference call;
associating call participant identification information with the voice signals in response to the detection of the voice signals;
preselecting a threshold level for silence suppression;
measuring a volume of voice signals received at a communication device associated with the call participant generating the voice signals; and
determining if the volume of voice signals exceeds the threshold level for silence suppression.
2. The method of claim 1 including the steps of storing call participant identification information associated with at least one call participant to the conference call, and
coordinating the call participant identification information with the voice signals generated from the call participant.
3. The method of claim 2 including the step of enabling a visual indication identifying the call participant to be established at the communication device to the conference call in response to the detection of voice signals from the call participant.
4. The method of claim 3 in which the communication device is a multimedia communication device associated with another call participant to the conference call different from the call participant generating the detected voice signals.
5. The method of claim 4 including the step of coupling together a plurality of multimedia communication devices to the conference call in which the multimedia communication devices have a visual display for displaying call participant identification information.
6. The method of claim 5 in which the call participant identification information includes at least one of:
a) video of the call participant,
b) photographic image of the call participant, and
c) data identifying the call participant.
7. The method of claim 4 including the step of determining if the multimedia communication device is an integrated multimedia communication device capable of performing mixing of voice signals with associated data and video signals.
8. The method of claim 1 including the steps of providing a visual indication of call participant identification information at a different multimedia communication device other than the communication device receiving the voice signals to identify at the different communication device the call participant generating the voice signals in response to the volume of voice signals exceeding the threshold for silence suppression.
9. The method of claim 8 including the step of removing the visual indication of call participant identification information at the different multimedia communication device in response to a determination that the volume of voice signals measured does not exceed the threshold level for silence suppression.
10. In a telecommunication system for establishing a conference call between a plurality of call participants communicating via multimedia communication devices, the improvement being a call participant identification system comprising:
means for detecting a presence of voice signals received at a multimedia communication device during the conference call;
a storage device for storing call participant identification information corresponding to a particular call participant of the conference call;
means responsive to the detection of voice signals during the conference call for associating the call participant identification information of the call participant generating the voice signals with the detected voice signal;
a multipoint control unit that measures a volume of voice signals received at the multimedia communication device; and
means for determining if the volume of voice signals exceeds a preselected threshold level for silence suppression.
11. The telecommunication system of claim 10 including means for enabling a visual indication identifying the call participant generating the voice signals to be established at another multimedia communication device to the conference call.
12. The telecommunication system of claim 11 in which the visual indication includes at least one of:
a) video of the call participant,
b) photographic image of the call participant, and
c) data identifying the call participant.
13. The telecommunication system of claim 11 including a conference coordination system for coordinating the call participant identification information with the voice signals generated from the call participant.
14. The telecommunication system of claim 13 including a multipoint control unit coupled with the multimedia communication devices for mixing streams of voice signals from the plurality of multimedia communication devices.
15. The telecommunication system of claim 11 in which said visual indication at the other multimedia communication device identifying the call participant generating the voice signals is established in response to the volume of voice signals exceeding the preselected threshold level.
16. The telecommunication system of claim 15 including means for removing the visual indication identifying the call participant in response to the volume of measured voice signals failing to exceed the preselected threshold level.
Description
BACKGROUND OF THE INVENTION

This invention relates generally to telecommunications networks and, more particularly, to multimedia communications networks for providing multimedia service and the like including voice, video, image and/or data.

It is known in certain telecommunications systems to employ conferencing capabilities such that more than two callers or participants to a call may communicate with each other for the duration of the call. Additionally, certain multimedia telecommunications conference systems attempt to simulate face-to-face meetings of the call participants. Such conference systems permit separate meeting participants to communicate with one another in multiple media such as voice, video, image and/or data from their own calling location without requiring that they convene in the same place.

During a conference call in known multimedia telecommunications conference calling systems, it is generally problematic for participants of the call to follow the pace of the call, especially in situations involving multi-site participants. It has been found that it is often difficult and confusing for a participant to determine who is actually speaking during the conference call. Often this is the case when the individual participants to the call are not familiar with one another, e.g. a newly formed team for a company project. Therefore, there is a need in the art for coordinating the activities of the participants to a conference call, including in the multimedia environment.

SUMMARY OF THE INVENTION

The problems noted above are solved in accordance with the present invention, which provides a conference coordination system to coordinate the activities of a participant to a conference call. The inventive system and method further provide coordination of image, data and/or video of a speaker with the voice of the speaker to enable conference participants to relate the voice to the identity of the speaking participant.

In accordance with the present invention a method and system of identifying a call participant to a conference call having a plurality of call participants communicating via a telecommunication system is performed by detecting a presence of voice signals during the conference call and associating call participant identification information with the voice signals in response to the detection of the voice signals. A visual indication identifying a speaking call participant is established at at least one communication device to the conference call when the identified call participant speaks.

BRIEF DESCRIPTION OF THE DRAWINGS

The foregoing advantageous features of the invention will be explained in greater detail and others will be made apparent from the detailed description of the present invention which is given with reference to the several figures of the drawing, in which:

FIG. 1 is an illustrative diagram of the telecommunication system for identifying conference call speaking participants in which the present invention may be practiced;

FIG. 2 is an illustrative functional block diagram of a multipoint conference unit;

FIGS. 3A and 3B are flow charts illustrating the steps performed for coordinating call participant information in a multimedia conference call; and

FIG. 4 is a graphical terminal screen representation which identifies the speaking call participant in a multimedia conference call.

DETAILED DESCRIPTION

Referring to FIG. 1, telecommunication system 10 is shown for establishing telephone conference calls between communication devices 12 with which call participants 14 interact during a conference call. It will be appreciated that the telecommunication system 10 may include a communications network (not shown) comprised of local or long distance telephone networks, or both, for the establishment of the telephone calls. During the conference call the call participant identification system 16 associates stored call participant identification information related to the speaking call participant 14A with voice signals which are detected as being received at the communication device 12A for the particular speaking call participant generating the voice signals. The call participant identification system 16 detects the presence of voice signals which are generated by a speaking call participant 14A and are received at the telephonic communication device 12A of the speaking participant. In response to the detection of these voice signals, the call participant identification system 16 associates stored call participant identification information for the speaking call participant 14A with the detected voice signals. The call participant identification information is broadly defined and may selectively be established in various forms such as coded information assigned to or associated with a particular call participant, text data representing the name of a call participant, or in the form of a medium such as video or a digitized photographic image of a call participant.
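
As an illustration of the association described above, the following minimal Python sketch (not part of the patent; the record fields, names, and device-keyed store are hypothetical) models the forms of call participant identification information named in the text and their lookup when voice signals are detected at a device.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ParticipantIdInfo:
    """Stored identification information for one call participant."""
    participant_code: str                 # coded information assigned to the participant
    name: str                             # text data, e.g. the participant's name
    photo: Optional[bytes] = None         # digitized photographic image, if submitted
    video_channel: Optional[int] = None   # live video channel, if any

# Hypothetical store keyed by the communication device at which voice signals are received.
id_store: dict[str, ParticipantIdInfo] = {
    "12A": ParticipantIdInfo(participant_code="p1", name="Bob Jones"),
}

def identify_speaker(device_id: str) -> Optional[ParticipantIdInfo]:
    """Associate voice signals detected at a device with stored identification info."""
    return id_store.get(device_id)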

A visual indication identifying the particular speaking call participant 14A is established at display devices 18B, 18C of the other communication devices 12B, 12C to the conference call in order to inform the other participants 14B, 14C to the call who the actual speaker is once that participant 14A speaks. The visual indication displayed at the display devices, which are preferably terminal screens 18B, 18C of the communication devices 12B, 12C, is preferably in the form of one of: a real-time video display of the speaking call participant, a photographic image of the call participant, or text data identifying the call participant (such as highlighting the name of the speaking participant once the participant speaks). In particular, in response to the detected volume of voice signals being received from the speaking participant 14A at the associated communication device 12A, the visual indication identifying the speaking call participant is provided at the different communication devices 12B, 12C other than the one communication device 12A receiving the voice signals. The visual indication identifies the speaking participant to all the other participants to the conference call.

Preferably, the communication devices 12, such as communication devices 12A, 12B, 12C of FIG. 1, are multimedia communication devices which are enabled to receive and transmit voice and data during a conference call. The multimedia communication devices 12A, 12B, 12C of FIG. 1 each have respective display devices or terminal screens 18A, 18B, 18C to display visual indications (preferably in the form of video or data displays) identifying the speaking call participant when he or she is speaking during the call. Many variations of multimedia communication devices capable of communicating telephonic voice signals, video and/or data information may selectively be employed. Examples of multimedia communication devices may include, but are not limited to, a personal computer with a built-in microphone for audio, a workstation including an attached video camera and telephone (analog or ISDN telephone), a personal computer with an integrated video camera and telephone, and the like. For further details on the use and operation of multimedia communication devices and the operation of graphical multimedia communications for a conference call, reference can be made to U.S. Pat. No. 5,627,978 issued May 6, 1997 to Altom et al. entitled “Graphical User Interface for Multimedia Call Set-Up and Call Handling in a Virtual Conference on a Desktop Computer Conferencing System” which is hereby incorporated by reference in its entirety. Preferably, the multimedia communication devices or terminals employed are those which follow H.323 ITU standards for communication.

As seen in FIG. 1, the communication devices are coupled to a multipoint control unit (MCU) 22 for transmission and receipt of voice signals during the conference call. The MCU 22 has the capability of mixing, switching, and bridging voice/video/data. The multipoint control unit 22 is a bridging or switching device used in support of multipoint videoconferencing to support many conferencing locations. MCU 22 may selectively be in the form of customer premises equipment or embedded in a wide area network in support of carrier-based videoconferencing. As discussed above, the presence of voice signals received from a speaking participant at a communication device 12A is detected. It will be appreciated that in a centralized call arrangement (i.e. the voice signals of all participants are mixed by the MCU 22) the digital signal processor (not shown) in the MCU 22 will detect the level of voice signals. Alternatively, in a decentralized conference call, each communication device 12 mixes the voice signals from all the participants, and a digital signal processor or central processing unit (not shown) at each communication device will detect the voice signals.

A web server 24 is also seen coupled with the communication devices 12A, 12B, 12C for the receipt and transmission of data information. Conference coordination system (CCS) 26 is preferably provided at the web server 24 for coordinating the call participant identification information with the voice signals generated by the speaking call participant. The MCU 22 and CCS 26 at the web server 24 are also coupled in order to enable transmission of the visual information identifying the speaker to be allocated to the appropriate other call participants during the conference call. Alternatively, the conference coordination system (CCS) 26 may selectively be implemented at the MCU 22. The CCS 26 is preferably programmed code implemented at a computer controlled device for coordinating the identification information of a call participant with the voice energy of the speaking call participant, with the operation of the CCS discussed in further detail in FIGS. 3A-3B. The CCS 26 supports the required functions such as image storage and communications to achieve the coordination between voice activities and video/image/data. The CCS 26 is implemented at the web server 24 to support the communications with the communication devices 12A, 12B, 12C.

Referring to FIG. 4, a graphical screen representation at a display device 18 identifying the speaking call participant is shown. A visual indication 20 is provided at the display device 18 to inform the conference call participants who the speaking participant is when the identified call participant is speaking. The visual indication 20 may selectively be provided in various forms. Data such as the name of the speaking participant 21 may selectively appear on the terminal screen 18 of the non-speaking participants or all the participants when one of the conference call participants is speaking. A video display or a photographic image 23 of the speaking call participant may selectively appear on the terminal screen 18 for the non-speaking conference call participants or all the call participants to identify the speaking participant during the conference call. Alternatively, the text data 21, video image or photographic image 23 associated with the speaking call participant which appears on the display device 18 may be illuminated or highlighted when the speaking call participant speaks. In the example seen in FIG. 4, the visual indication 20 of Bob Jones (either the text name 21, image 23 or both) is illuminated at communication devices 12B, 12C, FIG. 1, to inform the other participants 14B, 14C that Bob Jones at communication device 12A is the speaking call participant.

Referring again to FIG. 1, upon voice signals being received at a multimedia communications device 12A, and in turn the MCU 22, the volume of received voice signals is measured to determine if the volume exceeds a preselected threshold level. The preselected threshold level is preferably a level set for silence suppression. In the case of regular pulse code modulation (PCM) connections to the MCU 22, the multipoint processor 30, FIG. 2, determines if the volume of voice signals exceeds the preselected threshold level for silence suppression. The silence suppression level can be set by provisioning the MCU 22 or by channel-by-channel control at the MCU. In the case of packetized voice connections, the voice encoded by the terminal 12A-12C may selectively be provided with a silence indicator. For packetized voice situations, a digital signal processor preferably associated with either the computer controlled multimedia communication device 12A or the MCU 22 is enabled to detect the level of voice received and determine if the volume of voice meets the preselected level.
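
The threshold test can be sketched as follows, assuming linear PCM samples and an RMS-style volume measure (both are assumptions; the patent only requires that a measured volume be compared with a preselected silence-suppression level).

import math

SILENCE_THRESHOLD = 500.0  # assumed value; in practice provisioned at the MCU or per channel

def frame_volume(pcm_samples: list[int]) -> float:
    """Volume of one frame of linear PCM samples (RMS is an assumed measure)."""
    if not pcm_samples:
        return 0.0
    return math.sqrt(sum(s * s for s in pcm_samples) / len(pcm_samples))

def exceeds_silence_threshold(pcm_samples: list[int], threshold: float = SILENCE_THRESHOLD) -> bool:
    """True when the measured volume exceeds the level set for silence suppression."""
    return frame_volume(pcm_samples) > threshold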

In response to the volume of voice signals received (from the speaking participant) at the communication device 12A exceeding the level set for silence suppression, the visual indication 20, FIG. 4, identifying the speaking call participant is established at the other communication devices 12B, 12C to the conference call. This provides the benefit of informing the other participants to the call who the speaker is when the identified participant is speaking during the conference call. If the measured voice signals fail to exceed the preselected threshold level, then the visual indication 20, FIG. 4, associated with the call participant is removed from the display devices 18B, 18C, FIG. 1, of the other conference call participants (at multimedia communication devices 12B, 12C). If the multimedia communication devices 12B, 12C have the capacity to perform the processing of the voice signals received, then the communication devices 12B, 12C themselves preferably detect the volume and perform highlighting and removal of the video channel. If the multimedia communication devices 12A, 12B, 12C do not have the capabilities to process the voice signals, then the MCU 22 alone or alternatively in conjunction with the CCS 26 highlights and removes the visual indication.
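
The resulting establish/remove behaviour can be summarised in a short sketch (hypothetical names and devices; the patent leaves the display mechanism to the terminal, the MCU, or the CCS): every device other than the speaker's own is told to show the speaker's identification while the volume exceeds the threshold, and to clear it otherwise.

def update_speaker_indication(speaking_device: str,
                              participant_names: dict[str, str],
                              volume_exceeds_threshold: bool) -> dict[str, str]:
    """Return the text indication each *other* device should display: the speaker's
    name while the silence-suppression threshold is exceeded, otherwise nothing."""
    indication = participant_names.get(speaking_device, "") if volume_exceeds_threshold else ""
    return {device: indication
            for device in participant_names
            if device != speaking_device}

# Example: Bob Jones at device 12A speaks above the threshold (the other names are invented).
print(update_speaker_indication("12A",
                                {"12A": "Bob Jones", "12B": "Alice", "12C": "Carol"},
                                True))
# -> {'12B': 'Bob Jones', '12C': 'Bob Jones'}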

Referring now to FIG. 2, the multipoint conference unit 22 is shown having multipoint controller 28 and multipoint processor 30. It will be appreciated that the multipoint control unit 22 allocates streams of video signals and voice signals between the communication devices associated with the conference call. The MCU 22 acts as a server for a conference call and further is a centralized resource acting as a mixer device for voice and video signals. For example, in a conference call which includes multimedia communication devices 12A, 12B and 12C, FIG. 1, the MCU 22 will combine the voice and video streams from devices 12B and 12C and send them to device 12A. The MCU 22 allocates voice from communication devices 12A and 12C to device 12B and so forth. The multipoint controller 28 controls the signaling and communication handshaking between the multimedia communication devices 12A, 12B, 12C participating in a conference call. The multipoint processor 30 controls the mixing of voice and video streams to the conferencing multimedia communication units. The multipoint processor (MP) 30 is an H.323 entity on a packet data network which provides for the centralized processing of audio, video, and/or data streams in a multipoint conference. The MP 30 provides for the mixing, switching, or other processing of media streams under the control of the multipoint controller 28. The MP 30 may process a single media stream or multiple media streams depending on the type of conference supported.
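
The allocation performed by the MCU can be illustrated with a minimal sketch, assuming each device contributes one stream of samples and that mixing is simple sample-wise addition (an assumption; a real multipoint processor also handles codecs, level control, and video switching).

from itertools import zip_longest

def mix_for_each_device(streams: dict[str, list[int]]) -> dict[str, list[int]]:
    """For every device, mix the streams of all other devices, as the MCU does in a
    centralized conference (e.g. streams from 12B and 12C are combined and sent to 12A)."""
    mixed: dict[str, list[int]] = {}
    for receiver in streams:
        others = [s for device, s in streams.items() if device != receiver]
        mixed[receiver] = [sum(frame) for frame in zip_longest(*others, fillvalue=0)]
    return mixed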

Referring now to FIGS. 3A-3B, the steps which are performed for coordinating speaker information for a multimedia conference call are shown. In step 100, FIG. 3A, a party participant 14A desires to join a conference call. In step 101, FIG. 3A, a determination is made to see if the communication device or terminal 12A at which the participant is stationed is an integrated multimedia terminal. The terminal is a multimedia terminal if it supports the H.323, H.320 standard protocols and the like. An integrated multimedia communication device or terminal 12 is one which is capable of performing mixing of voice signals with associated data and video signals, preferably in accordance with the H.320, H.323, H.324 video conferencing standards and the like. If the participant has an integrated multimedia terminal 12A then in step 102, a check is made to determine if the terminal 12A performs decentralized conferencing.

Standard protocols support the determination of a centralized or decentralized conference. Centralized conference refers to all media streams being mixed by a centralized device, such as MCU 22. Decentralized conference refers to individual terminals performing the media mixing function. If the integrated multimedia terminal 12A, FIG. 1, is capable of performing a decentralized conference then in step 104, FIG. 3A, the participant at the terminal 12A joins the conference. As seen in step 104, FIG. 3A, upon joining the conference, voice and video are mixed by the integrated multimedia terminal 12A. The processing then proceeds to step 120 for operation during the conference call. If the integrated multimedia terminal 12A, FIG. 1, does not perform decentralized conferences, then in step 106, FIG. 3A, the call participant at the terminal 12A joins the conference; however, the voice and video are mixed by the multipoint control unit (MCU) 22, FIG. 1, with conference coordination system (CCS) 26 service being part of the MCU 22 and implemented at the multipoint controller 28, FIG. 2, and multipoint processor 30. The embodiment of the CCS 26 at web server 24 as seen in FIG. 1 is described in the paragraphs below. The functions of the CCS 26 in the MCU 22, FIG. 1, are similar to those in the multimedia devices 12A-12C of a decentralized conference. The processing then proceeds to step 120, FIG. 3A, for handling during the conference call.

If the participant does not have an integrated multimedia terminal 12A then the processing proceeds to step 108, FIG. 3A, to determine if the participant associated with the identified terminal has access to the conference coordination system (CCS) 26. If the participant does not have access to a conference coordination system 26, FIG. 1, the processing ends at step 110, FIG. 3A. If the participant at the terminal 12A has access to the CCS 26, then in step 112 the participant submits identification information or materials (such as a photograph of their likeness, video, their name, other identification information etc.) to the CCS 26. A storage device (such as a computer memory or other applicable conventional storage means) associated with the CCS stores the call participant identification information corresponding to a call participant. Alternatively, the call participant identification information may selectively be stored at a suitable storage device or memory of the multimedia communication device 12A. If the CCS 26 is integrated at a web server 24, then preferably a prompt is provided to the participant 14A at the terminal 12A asking if the participant plans to join the conference. When the participant confirms the desire to join the conference call, a prompt may selectively be issued requesting the participant to submit certain identification information (i.e. digitally stored photograph, video, participant's name, etc.) to be transmitted to the conference coordination system 26, FIG. 1. In step 114, FIG. 3A, the participant joins the conference with the received voice energy being mixed by the MCU 22, FIG. 1.

In step 116, FIG. 3A, the participant joins the CCS 26 and the identification information (i.e. digitally stored photo image) of the participant is accessed. In step 118, the MCU 22 makes a connection to the CCS 26 in the web server 24. The connection is made so that the MCU 22 can communicate with the CCS 26 when the conference is in progress.
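
The join sequence of FIG. 3A can be condensed into the following sketch (function and parameter names are hypothetical; in practice the capability checks come from H.323/H.320 signalling), returning where mixing and identification are handled for a joining participant.

def join_conference(terminal_is_integrated: bool,
                    supports_decentralized: bool,
                    has_ccs_access: bool) -> str:
    """Mirror the decisions of steps 100-118 in FIG. 3A."""
    if terminal_is_integrated:
        if supports_decentralized:
            return "voice and video mixed at the integrated terminal"        # step 104
        return "voice and video mixed at the MCU, CCS functions in the MCU"  # step 106
    if not has_ccs_access:
        return "no conference coordination available"                        # step 110
    # Steps 112-118: submit identification materials to the CCS at the web server,
    # join with voice mixed by the MCU, and connect the MCU to the CCS.
    return "voice mixed at the MCU, identification handled by the CCS at the web server"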

Referring to FIG. 3B, in step 120 the conference call between the participating callers is in progress. The participants speak at their respective multimedia communication devices 12A, 12B, 12C and the number of speakers can be more than one. A test is preferably performed to determine which configuration the system 16 is running in. In step 122, a determination is made concerning the ability of the terminals 12A-12C to perform a decentralized conference. The system 16 determines if certain terminals to the conference have the capability of processing received voice signals. If the communication device terminal 12A associated with the participant is enabled to perform a decentralized conference then in step 124, FIG. 3B, the terminal 12A detects when a volume of received voice energy is greater than the threshold set for silence suppression. Upon the detection of volume which exceeds the threshold, the terminal 12A, FIG. 1, highlights the video channel which is associated with the voice channel or indicated in the multimedia protocol service (such as H.323 and like multimedia protocols), as seen in step 124, FIG. 3B. If the terminal 12A detects a volume of voice signals, step 124, FIG. 3B, that is lower than the threshold set for silence suppression, then the terminal removes the highlight of the video channel. The highlighting may be implemented such that a banner (or additional banner) is superimposed on the video or a flashing caption is superimposed on the video. Flashing text may be implemented under (or proximate to) the video of the identified speaking participant.

If the terminal does not perform the decentralized conference then in step 126, FIG. 3B, the MCU 22 detects the volume of received voice. If the volume of voice is greater than the threshold set for silence suppression, then the MCU 22 sends an indication of an active channel to the CCS 26. If the CCS 26, FIG. 1, is implemented at the MCU 22 then the video of the speaker (such as the photograph display of the speaker) is highlighted by the MCU in accordance with the directives of the CCS. If the CCS 26 is implemented at an Internet web server 24, the photo image or other identification information of the speaker may selectively be highlighted by the CCS 26. As seen in step 126, FIG. 3B, when the MCU 22, FIG. 1, detects a volume of voice that is lower than the threshold set for silence suppression, the MCU 22 will remove the highlight of the information identifying a speaker at the terminal. If the CCS 26 is at the web server 24 and the measured volume does not meet the required threshold, the MCU 22 sends an “inactive channel” message to the CCS 26.
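
The in-progress handling of steps 122-126 reduces to a per-channel decision that can be sketched as below (hypothetical names; the return values stand in for the highlight/remove actions and the active/inactive channel messages described above).

from typing import Optional, Tuple

def handle_voice_frame(decentralized: bool, volume: float, threshold: float) -> Tuple[str, Optional[str]]:
    """A decentralized terminal highlights or removes the speaker's video channel itself;
    otherwise the MCU reports an active or inactive channel to the CCS."""
    active = volume > threshold
    if decentralized:
        return ("highlight video channel" if active else "remove highlight", None)
    return ("defer to MCU/CCS", "active channel" if active else "inactive channel")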

The processing then proceeds to step 128, FIG. 3B, to determine if any terminal is connected with the conference call. If a connection to the conference call remains then the processing returns to step 120, FIG. 3B, to monitor the conference in progress. The processing exits at step 130 if it is determined that there is no connection to the conference.

While a detailed description of the preferred embodiments of the invention has been given, it shall be appreciated that many variations can be made thereto without departing from the scope of the invention as set forth in the appended claims.

Patent Citations

Cited patent | Filing date | Publication date | Applicant | Title
US5710591 * | Jun 27, 1995 | Jan 20, 1998 | AT&T | Method and apparatus for recording and indexing an audio and multimedia conference
US5936662 * | Mar 20, 1996 | Aug 10, 1999 | Samsung Electronics Co., Ltd. | Video conference control system using an integrated services digital network
US6020915 * | Oct 22, 1996 | Feb 1, 2000 | AT&T Corp. | Method and system for providing an analog voice-only endpoint with pseudo multimedia service
Classifications

U.S. Classification: 379/202.01, 348/14.08, 379/204.01
International Classification: H04M7/12, H04M7/00, H04M3/56
Cooperative Classification: H04M3/567, H04M7/12, H04M3/569, H04M7/006, H04M2201/40, H04M2201/42
European Classification: H04M3/56P2, H04M3/56M
Legal Events

Date | Code | Event | Description
Mar 14, 2013 | FPAY | Fee payment | Year of fee payment: 12
Apr 9, 2009 | FPAY | Fee payment | Year of fee payment: 8
Mar 23, 2005 | FPAY | Fee payment | Year of fee payment: 4
Dec 21, 1998 | AS | Assignment | Owner name: LUCENT TECHNOLOGIES INC., NEW JERSEY
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CHANG, YOUNG-FU;REEL/FRAME:009673/0699
Effective date: 19981218