|Publication number||US6304648 B1|
|Publication type||Grant|
|Application number||US 09/217,503|
|Publication date||16 Oct 2001|
|Filing date||21 Dec 1998|
|Priority date||21 Dec 1998|
|Fee payment status||Paid|
|Publication number||09217503, 217503, US 6304648 B1, US 6304648B1, US-B1-6304648, US6304648 B1, US6304648B1|
|Original assignee||Lucent Technologies Inc.|
|Export citation||BiBTeX, EndNote, RefMan|
|Patent citations (3), Cited by (116), Classifications (14), Legal events (4)|
|External links: USPTO, USPTO Assignment, Espacenet|
This invention relates generally to telecommunications networks and, more particularly, to multimedia communications networks for providing multimedia service and the like including voice, video, image and/or data.
It is known in certain telecommunications systems to employ conferencing capabilities such that more than two callers or participants to a call may communicate with each other for the duration of the call. Additionally, certain multimedia telecommunications conference systems attempt to simulate face-to-face meetings of the call participants. Such conference systems permit separate meeting participants to communicate with one another in multiple media such as voice, video, image and/or data from their own calling locations without requiring that they convene in the same place.
During a conference call in known multimedia telecommunications conference calling systems, it is often difficult for participants to follow the pace of the call, especially when the participants are spread across multiple sites. It has been found that it is frequently difficult and confusing for a participant to determine who is actually speaking during the conference call. This is often the case when the individual participants to the call are not familiar with one another, e.g., a newly formed team for a company project. Therefore, there is a need in the art for coordinating the activities of the participants to a conference call, including in the multimedia environment.
The problems noted above are solved in accordance with the present invention, which provides a conference coordination system to coordinate the activities of a participant to a conference call. The inventive system and method further provide coordination of image, data and/or video of a speaker with the voice of the speaker to enable conference participants to associate the voice with the identity of the speaking participant.
In accordance with the present invention a method and system of identifying a call participant to a conference call having a plurality of call participants communicating via a telecommunication system is performed by detecting a presence of voice signals during the conference call and associating call participant identification information with the voice signals in response to the detection of the voice signals. A visual indication identifying a speaking call participant is established at at least one communication device to the conference call when the identified call participant speaks.
The foregoing advantageous features of the invention will be explained in greater detail and others will be made apparent from the detailed description of the present invention which is given with reference to the several figures of the drawing, in which:
FIG. 1 is an illustrative diagram of the telecommunication system for identifying conference call speaking participants in which the present invention may be practiced;
FIG. 2 is an illustrative functional block diagram of a multipoint conference unit;
FIGS. 3A and 3B are flow charts illustrating the steps performed for coordinating call participant information in a multimedia conference call; and
FIG. 4 is a graphical terminal screen representation which identifies the speaking call participant in a multimedia conference call.
Referring to FIG. 1, telecommunication system 10 for establishing telephone conference calls between communication devices 12, with which call participants 14 interact during a conference call, is shown. It will be appreciated that the telecommunication system 10 may include a communications network (not shown) comprised of local or long distance telephone networks, or both, for the establishment of the telephone calls. The call participant identification system 16 detects the presence of voice signals which are generated by a speaking call participant 14A and received at the telephonic communication device 12A of the speaking participant. In response to the detection of these voice signals, the call participant identification system 16 associates stored call participant identification information for the speaking call participant 14A with the detected voice signals. The call participant identification information is broadly defined and may selectively be established in various forms, such as coded information assigned to or associated with a particular call participant, text data representing the name of a call participant, or a media form such as video or a digitized photographic image of a call participant.
A visual indication identifying the particular speaking call participant 14A is established at the display devices 18B, 18C of the other communication devices 12B, 12C to the conference call in order to inform the other participants 14B, 14C to the call who the actual speaker is once that participant 14A speaks. The visual indication displayed at the display devices, which are preferably terminal screens 18B, 18C of the communication devices 12B, 12C, is preferably in one of the following forms: a real time video display of the speaking call participant, a photographic image of the call participant, or text data identifying the call participant (such as highlighting the name of the speaking participant once the participant speaks). In particular, in response to the detected volume of voice signals being received from the speaking participant 14A at the associated communication device 12A, the visual indication identifying the speaking call participant is provided at the communication devices 12B, 12C other than the one communication device 12A receiving the voice signals. The visual indication identifies the speaking participant to all the other participants to the conference call.
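The association step described above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: the dictionary of device ids and participant records, and the function name, are hypothetical; it simply shows that when voice is detected at one device, that participant's stored identification is turned into a highlight notice for every other device.

```python
# Hypothetical registry: device id -> stored call participant identification
# information (here, a display name and a digitized photo reference).
PARTICIPANTS = {
    "12A": {"name": "Bob Jones", "image": "bob_jones.jpg"},
    "12B": {"name": "Ann Lee",   "image": "ann_lee.jpg"},
    "12C": {"name": "Raj Patel", "image": "raj_patel.jpg"},
}

def visual_indications(speaking_device):
    """Return (target_device, indication) pairs for all non-speaking devices."""
    info = PARTICIPANTS[speaking_device]
    return [
        (dev, {"highlight": info["name"], "image": info["image"]})
        for dev in PARTICIPANTS
        if dev != speaking_device  # the speaker's own screen is not targeted
    ]

print(visual_indications("12A"))
```

When participant 14A at device 12A speaks, the notices go to devices 12B and 12C only, matching the behavior described above.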
Preferably, the communication devices 12, such as communication devices 12A, 12B, 12C of FIG. 1, are multimedia communication devices which are enabled to receive and transmit voice and data during a conference call. The multimedia communication devices 12A, 12B, 12C of FIG. 1 each have respective display devices or terminal screens 18A, 18B, 18C to display visual indications (preferably in the form of video or data displays) identifying the speaking call participant when he or she is speaking during the call. Many variations of multimedia communication devices capable of communicating telephonic voice signals, video and/or data information may selectively be employed. Examples of multimedia communication devices include, but are not limited to, a personal computer with a built-in microphone for audio, a workstation with an attached video camera and telephone (analog or ISDN), a personal computer with an integrated video camera and telephone, and the like. For further details on the use and operation of multimedia communication devices and the operation of graphical multimedia communications for a conference call, reference can be made to U.S. Pat. No. 5,627,978 issued May 6, 1997 to Altom et al. entitled “Graphical User Interface for Multimedia Call Set-Up and Call Handling in a Virtual Conference on a Desktop Computer Conferencing System,” which is hereby incorporated by reference in its entirety. Preferably, the multimedia communication devices or terminals employed are those which follow the H.323 ITU standards for communication.
As seen in FIG. 1, the communication devices are coupled to a multipoint control unit (MCU) 22 for transmission and receipt of voice signals during the conference call. The MCU 22 has the capability of mixing, switching, and bridging voice/video/data. The multipoint control unit 22 is a bridging or switching device used in support of multipoint videoconferencing to support many conferencing locations. MCU 22 may selectively be in the form of customer premises equipment or embedded in a wide area network in support of carrier-based videoconferencing. As discussed above the presence of voice signals received from a speaking participant at a communication device 12A are detected. It will be appreciated that in a centralized call arrangement (i.e. the voice signals of all participants are mixed by the MCU 22) the digital signal processor (not shown) in the MCU 22 will detect the level of voice signals. Alternatively in a decentralized conference call each communication device 12 mixes the voice signals from all the participants in which a digital signal processor or central processor unit (not shown) at each communication device will detect voice signals.
A web server 24 is also seen coupled with the communication devices 12A, 12B, 12C for the receipt and transmission of data information. Conference coordination system (CCS) 26 is preferably provided at the web server 24 for coordinating the call participant identification information with the voice signals generated by the speaking call participant. The MCU 22 and CCS 26 at the web server 24 are also coupled in order to enable transmission of the visual information identifying the speaker to the appropriate other call participants during the conference call. Alternatively, the conference coordination system (CCS) 26 may selectively be implemented at the MCU 22. The CCS 26 is preferably programmed code implemented at a computer controlled device for coordinating the identification information of a call participant with the voice energy of the speaking call participant, with the operation of the CCS discussed in further detail in FIGS. 3A-3B. The CCS 26 supports the required functions, such as image storage and communications, to achieve the coordination between voice activities and video/image/data. The CCS 26 is implemented at the web server 24 to support the communications with the communication devices 12A, 12B, 12C.
Referring to FIG. 4, a graphical screen representation at a display device 18 identifying the speaking call participant is shown. A visual indication 20 is provided at the display device 18 to inform the conference call participants who the speaking participant is when the identified call participant is speaking. The visual indication 20 may selectively be provided in many various forms. Data such as the name of the speaking participant 21 may selectively appear on the terminal screen 18 of the non-speaking participants or all the participants when one of the conference call participants is speaking. A video display or a photographic image 23 of the speaking call participant may selectively appear on the terminal screen 18 for the non-speaking conference call participants or all the call participants to identify the speaking participant during the conference call. Alternatively, the text data 21, video image or photographic image 23 associated with the speaking call participant which appears on the display device 18 may be illuminated or highlighted when the speaking call participant speaks. In the example seen in FIG. 4, the visual indication 20 of Bob Jones (either the text name 21, image 23 or both) is illuminated at communication devices 12B, 12C, FIG. 1, to inform the other participants 14B, 14C that Bob Jones at communication device 12A is the speaking call participant.
Referring again to FIG. 1, upon receipt of voice signals at a multimedia communications device 12A, and in turn the MCU 22, the volume of received voice signals is measured to determine if the volume exceeds a preselected threshold level. The preselected threshold level is preferably a level set for silence suppression. In the case of regular pulse code modulation (PCM) connections to the MCU 22, the multipoint processor 30, FIG. 2, determines if the volume of voice signals exceeds the preselected threshold level for silence suppression. The silence suppression level can be set by provisioning the MCU 22 or by channel-by-channel control at the MCU. In the case of a packetized voice connection, the voice encoded by the terminal 12A-12C may selectively be provided with a silence indicator. For packetized voice situations, a digital signal processor preferably associated with either the computer controlled multimedia communication device 12A or the MCU 22 is enabled to detect the level of voice received and determine if the volume of voice meets the preselected level.
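One common way to realize the volume test described above is to measure the energy of a short frame of PCM samples and compare it against the silence-suppression level. The sketch below is illustrative only: the RMS measure, frame contents, and threshold value are assumptions for the example, since the patent leaves the measurement method to the DSP.

```python
import math

SILENCE_THRESHOLD = 500.0   # illustrative RMS level set for silence suppression

def exceeds_silence_threshold(samples, threshold=SILENCE_THRESHOLD):
    """Return True if the frame's RMS amplitude exceeds the threshold."""
    if not samples:
        return False
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return rms > threshold

quiet  = [10, -12, 8, -9] * 40           # near-silence: RMS ~10
speech = [4000, -3800, 4200, -3900] * 40 # active speech: RMS ~4000
print(exceeds_silence_threshold(quiet), exceeds_silence_threshold(speech))
```

A frame below the threshold is treated as silence, so no speaker highlight is raised for it; a frame above it marks the channel as active.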
In response to the volume of voice signals received (from the speaking participant) at the communication device 12A exceeding the level set for silence suppression, the visual indication 20, FIG. 4, identifying the speaking call participant is established at the other communication devices 12B, 12C to the conference call. This provides the benefit of informing the other participants to the call who the speaker is when the identified participant is speaking during the conference call. If the measured voice signals fail to exceed the preselected threshold level, then the visual indication 20, FIG. 4, associated with the call participant is removed from the display devices 18B, 18C, FIG. 1, of the other conference call participants (at multimedia communication devices 12B, 12C). If the multimedia communication devices 12B, 12C have the capacity to perform the processing of the voice signals received, then the communication devices 12B, 12C themselves preferably detect the volume and perform highlighting and removal of the video channel. If the multimedia communication devices 12A, 12B, 12C do not have the capabilities to process the voice signals, then the MCU 22 alone or alternatively in conjunction with the CCS 26 highlights and removes the visual indication.
Referring now to FIG. 2, the multipoint conference unit 22 is shown having multipoint controller 28 and multipoint processor 30. It will be appreciated that the multipoint control unit 22 allocates streams of video signals and voice signals between the communication devices associated with the conference call. The MCU 22 acts as a server for a conference call and further is a centralized resource acting as a mixer device for voice and video signals. For example, in a conference call which includes multimedia communication devices 12A, 12B and 12C, FIG. 1, the MCU 22 will combine the voice and video streams from devices 12B and 12C and send them to device 12A. The MCU 22 allocates voice from communication devices 12A and 12C to device 12B and so forth. The multipoint controller 28 controls the signaling and communication handshaking between the multimedia communication devices 12A, 12B, 12C participating in a conference call. The multipoint processor 30 controls the mixing of voice and video streams to the conferencing multimedia communication units. The multipoint processor (MP) 30 is an H.323 entity on a packet data network which provides for the centralized processing of audio, video, and/or data streams in a multipoint conference. The MP 30 provides for the mixing, switching, or other processing of media streams under the control of the multipoint controller 28. The MP 30 may process a simple media stream or multiple media streams depending on the type of conference supported.
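The mixing role of the MCU described above can be illustrated with a minimal sketch. The device ids, equal-length 16-bit PCM frames, and simple additive mix are assumptions for the example; a real multipoint processor would also handle resampling, codecs, and stream switching.

```python
def mix_for_device(target, frames):
    """Sum the PCM samples from every device except `target`, clamped to
    the signed 16-bit range. frames: dict of device id -> equal-length
    list of PCM samples."""
    length = len(next(iter(frames.values())))
    mixed = [0] * length
    for dev, samples in frames.items():
        if dev == target:
            continue  # a device does not receive its own voice back
        for i, s in enumerate(samples):
            mixed[i] += s
    # clamp to the signed 16-bit PCM range
    return [max(-32768, min(32767, s)) for s in mixed]

frames = {"12A": [100, 200], "12B": [10, -20], "12C": [5, 5]}
print(mix_for_device("12A", frames))   # device 12A hears 12B + 12C
```

This mirrors the allocation in the text: device 12A receives the combined streams of 12B and 12C, device 12B receives 12A and 12C, and so forth.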
Referring now to FIGS. 3A-3B, the steps which are performed for coordinating speaker information for a multimedia conference call are shown. In step 100, FIG. 3A, a party participant 14A desires to join a conference call. In step 101, FIG. 3A, a determination is made as to whether the communication device or terminal 12A at which the participant is stationed is an integrated multimedia terminal; a terminal is a multimedia terminal if it supports the H.323 or H.320 standard protocol or the like. An integrated multimedia communication device or terminal 12 is one which is capable of mixing voice signals with associated data and video signals, preferably in accordance with the H.320, H.323, H.324, or like video conferencing standards. If the participant has an integrated multimedia terminal 12A, then in step 102 a check is made to determine if the terminal 12A performs a decentralized conference.
Standard protocols support the determination of centralized or decentralized conference. Centralized conference refers to all media streams being mixed by a centralized device, such as MCU 22. Decentralized conference refers to individual terminals performing the media mixing function. If the integrated multimedia terminal 12A, FIG. 1, is capable of performing a decentralized conference then in step 104, FIG. 3A, the participant at the terminal 12A joins the conference. As seen in step 104, FIG. 3A, upon joining the conference, voice and video are mixed by the integrated multimedia terminal 12A. The processing then proceeds to step 120 for operation during the conference call. If the integrated multimedia terminal 12A, FIG. 1, does not perform decentralized conferences, then in step 106, FIG. 3A, the call participant at the terminal 12A joins the conference, however, the voice and video are mixed by the multipoint control unit (MCU) 22, FIG. 1, with conference coordination system (CCS) 26 service being part of MCU 22 and implemented at the multipoint controller 28, FIG. 2, and multipoint processor 30. The embodiment of the CCS 26 at web server 24 as seen in FIG. 2 is described in the paragraphs below. The functions of the CCS 26 in the MCU 22, FIG. 1, are similar to those in the multimedia devices 12A-12C of a decentralized conference. The processing then proceeds to step 120, FIG. 3A, for handling during the conference call.
If the participant does not have an integrated multimedia terminal 12A then the processing proceeds to step 108, FIG. 3A, to determine if the participant associated with the identified terminal has access to the conference coordination system (CCS) 26. If the participant does not have access to a conference coordination system 26, FIG. 1, the processing ends at step 110, FIG. 3A. If the participant at the terminal 12A has access to the CCS 26, then in step 112 the participant submits identification information or materials (such as a photograph of their likeness, video, their name, other identification information etc.) to the CCS 26. A storage device (such as a computer memory or other applicable conventional storage means) associated with the CCS stores the call participant identification information corresponding to a call participant. Alternatively, the call participant identification information may selectively be stored at a suitable storage device or memory of the multimedia communication device 12A. If the CCS 26 is integrated at a web server 24, then preferably a prompt is provided to the participant 14A at the terminal 12A asking if the participant plans to join the conference. When the participant confirms the desire to join the conference call, a prompt may selectively be issued requesting the participant to submit certain identification information (i.e. digitally stored photograph, video, participant's name, etc.) to be transmitted to the conference coordination system 26, FIG. 1. In step 114, FIG. 3A, the participant joins the conference with the received voice energy being mixed by the MCU 22, FIG. 1.
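The join-time decision of steps 100 through 114 can be sketched as a small branching function. The function and return strings below are illustrative assumptions, not part of the patent; the branch structure follows the flow of FIG. 3A.

```python
def join_path(is_integrated_multimedia, supports_decentralized, has_ccs_access):
    """Select the join path of FIG. 3A for one participant's terminal."""
    if is_integrated_multimedia:
        if supports_decentralized:
            return "join: terminal mixes voice/video"       # step 104
        return "join: MCU mixes voice/video, CCS at MCU"    # step 106
    if not has_ccs_access:
        return "end"                                        # step 110
    return "submit ID info to CCS, join: MCU mixes voice"   # steps 112-114

print(join_path(True, True, False))
print(join_path(False, False, True))
```

Note that only the last path requires the participant to submit identification materials at join time; integrated terminals are assumed to already carry their media identification in the conferencing protocol.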
In step 116, FIG. 3A, the participant joins the CCS 26 and the identification information (i.e. digitally stored photo image) of the participant is accessed. In step 118, the MCU 22 makes a connection to the CCS 26 in the web server 24. The connection is made so that the MCU 22 can communicate with the CCS 26 when the conference is in progress.
Referring to FIG. 3B, in step 120 the conference call between the participating callers is in progress. The participants speak at their respective multimedia communication devices 12A, 12B, 12C and the number of speakers can be more than one. A test is preferably performed to determine in which configuration the system 16 is running. In step 122, a determination is made concerning the ability of the terminals 12A-12C to perform a decentralized conference. The system 16 determines if certain terminals to the conference have the capability of processing received voice signals. If the communication device terminal 12A associated with the participant is enabled to perform a decentralized conference then in step 124, FIG. 3B, the terminal 12A detects when a volume of received voice energy is greater than the threshold set for silence suppression. Upon the detection of volume which exceeds the threshold, the terminal 12A, FIG. 1, highlights the video channel which is associated with the voice channel or indicated in the multimedia protocol service (such as H.323 and like multimedia protocols) as seen in step 124, FIG. 3B. If the terminal 12A detects a volume of voice signals, step 124, FIG. 3B, that is lower than the threshold set for silence suppression, then the terminal removes the highlight of the video channel. The highlighting may be implemented such that a banner (or additional banner) is superimposed on the video or a flashing caption is superimposed on the video. Flashing text may be implemented under (or proximate to) the video of the identified speaking participant.
If the terminal does not perform the decentralized conference then in step 126, FIG. 3B, the MCU 22 detects the volume of received voice. If the volume of voice is greater than the threshold set for silence suppression, then the MCU 22 sends an indication of active channel to the CCS 26. If the CCS 26, FIG. 1, is implemented at the MCU 22 then the video of the speaker (such as the photograph display of the speaker) is highlighted by the MCU in accordance with the directives of the CCS. If the CCS 26 is implemented at an Internet web server 24 the photo image or other identification information of the speaker may selectively be highlighted by the CCS 26. As seen in step 126, FIG. 3B, when the MCU 22, FIG. 1, detects a volume of voice that is lower than the threshold set for silence suppression, the MCU 22 will remove the highlight of the information identifying a speaker at the terminal. If the CCS 26 is at the web server 24, then if the measured volume does not meet the required threshold, the MCU 22 sends an “inactive channel” message to the CCS 26.
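The in-conference behavior of steps 120 through 126 amounts to a loop that, on each pass, highlights channels whose measured volume exceeds the silence-suppression threshold and removes the highlight from the rest. The sketch below is illustrative: the threshold value, channel names, and volume figures are assumptions, and the same update logic would run at a terminal (decentralized) or at the MCU/CCS (centralized).

```python
THRESHOLD = 500   # illustrative silence-suppression level

def update_highlights(channel_volumes, highlighted):
    """Update the set of highlighted channels from one pass of measured
    per-channel volumes, as in steps 124/126 of FIG. 3B."""
    for channel, volume in channel_volumes.items():
        if volume > THRESHOLD:
            highlighted.add(channel)      # "active channel" -> highlight
        else:
            highlighted.discard(channel)  # below threshold -> remove highlight
    return highlighted

active = set()
update_highlights({"12A": 2200, "12B": 40, "12C": 30}, active)
print(sorted(active))
update_highlights({"12A": 10, "12B": 1800, "12C": 30}, active)
print(sorted(active))
```

Because more than one speaker is allowed, several channels can be highlighted at once; a channel is only cleared when its own volume falls below the threshold.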
The processing then proceeds to step 128, FIG. 3B, to determine if any terminal is connected with the conference call. If a connection to the conference call remains then the processing returns to step 120, FIG. 3B, to monitor the conference in progress. The processing exits at step 130 if it is determined that there is no connection to the conference.
While a detailed description of the preferred embodiments of the invention has been given, it shall be appreciated that many variations can be made thereto without departing from the scope of the invention as set forth in the appended claims.
|Cited patent||Filing date||Publication date||Applicant||Title|
|US5710591 *||27 Jun 1995||20 Jan 1998||At&T||Method and apparatus for recording and indexing an audio and multimedia conference|
|US5936662 *||20 Mar 1996||10 Aug 1999||Samsung Electronics Co., Ltd.||Video conference control system using an integrated services digital network|
|US6020915 *||22 Oct 1996||1 Feb 2000||At&T Corp.||Method and system for providing an analog voice-only endpoint with pseudo multimedia service|
|Citing patent||Filing date||Publication date||Applicant||Title|
|US6687358 *||8 Mar 2000||3 Feb 2004||Intel Corporation||Method and apparatus for joining a party to a multipoint conference using digital techniques|
|US6754322 *||31 Aug 1999||22 Jun 2004||William Jackson Bushnell||Call me conference call system|
|US6760750 *||28 Sep 2000||6 Jul 2004||Polycom Israel, Ltd.||System and method of monitoring video and/or audio conferencing through a rapid-update web site|
|US6768792 *||17 Dec 2001||27 Jul 2004||International Business Machines Corporation||Identifying call parties to a call to an incoming calling party|
|US6788946 *||12 Apr 2001||7 Sep 2004||Qualcomm Inc||Systems and methods for delivering information within a group communications system|
|US6839417||10 Sep 2002||4 Jan 2005||Myriad Entertainment, Inc.||Method and apparatus for improved conference call management|
|US6850609 *||23 Oct 1998||1 Feb 2005||Verizon Services Corp.||Methods and apparatus for providing speech recording and speech transcription services|
|US6876734 *||29 Feb 2000||5 Apr 2005||Emeeting.Net, Inc.||Internet-enabled conferencing system and method accommodating PSTN and IP traffic|
|US6961416||29 Feb 2000||1 Nov 2005||Emeeting.Net, Inc.||Internet-enabled conferencing system and method accommodating PSTN and IP traffic|
|US7043530 *||30 Mar 2001||9 May 2006||At&T Corp.||System, method and apparatus for communicating via instant messaging|
|US7047030 *||2 May 2002||16 May 2006||Symbian Limited||Group communication method for a wireless communication device|
|US7062025||4 Apr 2005||13 Jun 2006||Emeeting.Net, Inc.||Internet-enabled conferencing system and method accommodating PSTN and IP traffic|
|US7139379 *||19 Jun 2003||21 Nov 2006||International Business Machines Corporation||Monitoring telephone conferences through a network of computer controlled display terminals, each associated with a telephone station and displaying a user-interactive monitoring page|
|US7154999 *||15 Oct 2003||26 Dec 2006||Lucent Technologies Inc.||Sending identification information of a plurality of communication devices that are active on a communication session to information receiving component|
|US7174365 *||8 Nov 2000||6 Feb 2007||Polycom Israel Ltd.||System and method for controlling one or more multipoint control units as one multipoint control unit|
|US7177412 *||20 Feb 2002||13 Feb 2007||Berlyoung Danny L||Multi-media communication management system with multicast messaging capabilities|
|US7184531 *||5 Jun 2003||27 Feb 2007||Siemens Communications, Inc.||System and method for authorizing a party to join a conference|
|US7190388 *||22 Jun 2001||13 Mar 2007||France Telecom||Communication terminal and system|
|US7246151||21 May 2004||17 Jul 2007||At&T Corp.||System, method and apparatus for communicating via sound messages and personal sound identifiers|
|US7280650 *||28 Aug 2002||9 Oct 2007||Intel Corporation||Method and apparatus to manage a conference|
|US7298834 *||22 Nov 2002||20 Nov 2007||3Com Corporation||System and method for large capacity conference calls|
|US7305078 *||18 Dec 2003||4 Dec 2007||Electronic Data Systems Corporation||Speaker identification during telephone conferencing|
|US7317791||8 Aug 2002||8 Jan 2008||International Business Machines Corporation||Apparatus and method for controlling conference call participants|
|US7346654 *||11 Apr 2000||18 Mar 2008||Mitel Networks Corporation||Virtual meeting rooms with spatial audio|
|US7499969 *||25 Jun 2004||3 Mar 2009||Apple Inc.||User interface for multiway audio conferencing|
|US7539290 *||13 May 2005||26 May 2009||Verizon Services Corp.||Facilitation of a conference call|
|US7574472||25 Sep 2003||11 Aug 2009||Polycom, Inc.||System and method of monitoring video and/or audio conferencing through a rapid-update website|
|US7613137||20 Nov 2003||3 Nov 2009||Insors Integrated Communications||Data stream communication|
|US7617094||16 Apr 2003||10 Nov 2009||Palo Alto Research Center Incorporated||Methods, apparatus, and products for identifying a conversation|
|US7627599||10 Apr 2006||1 Dec 2009||Palo Alto Research Center Incorporated||Method, apparatus, and program product for visualizing tree structured information|
|US7639633 *||13 Dec 2004||29 Dec 2009||Nortel Networks Limited||Apparatus and method for setting up a conference call|
|US7698141||16 Apr 2003||13 Apr 2010||Palo Alto Research Center Incorporated||Methods, apparatus, and products for automatically managing conversational floors in computer-mediated communications|
|US7805487||15 Feb 2006||28 Sep 2010||At&T Intellectual Property Ii, L.P.||System, method and apparatus for communicating via instant messaging|
|US7814150 *||3 Nov 2003||12 Oct 2010||Cisco Technology, Inc.||Apparatus and method to bridge telephone and data networks|
|US7822607||10 Apr 2006||26 Oct 2010||Palo Alto Research Center Incorporated||Computer application environment and communication system employing automatic identification of human conversational behavior|
|US7924813 *||18 Nov 2004||12 Apr 2011||A&T Intellectual Property II, LP||System, device, and method for providing data to a call participant|
|US7949116 *||8 Dec 2003||24 May 2011||Insors Integrated Communications||Primary data stream communication|
|US7995732 *||4 Oct 2007||9 Aug 2011||At&T Intellectual Property I, Lp||Managing audio in a multi-source audio environment|
|US8000319||15 Jan 2010||16 Aug 2011||Polycom, Inc.||Multipoint multimedia/audio conference using IP trunking|
|US8010575||30 Nov 2009||30 Aug 2011||Palo Alto Research Center Incorporated||System and method for redistributing interest in a hierarchical data structure representation|
|US8041800 *||8 Nov 2005||18 Oct 2011||International Business Machines Corporation||Automatic orchestration of dynamic multiple party, multiple media communications|
|US8126705||9 Nov 2009||28 Feb 2012||Palo Alto Research Center Incorporated||System and method for automatically adjusting floor controls for a conversation|
|US8166102||17 Feb 2005||24 Apr 2012||Alcatel Lucent||Signaling method for internet telephony|
|US8218457 *||29 Sep 2004||10 Jul 2012||Stmicroelectronics Asia Pacific Pte. Ltd.||Apparatus and method for providing communication services using multiple signaling protocols|
|US8270585 *||4 Nov 2003||18 Sep 2012||Stmicroelectronics, Inc.||System and method for an endpoint participating in and managing multipoint audio conferencing in a packet network|
|US8295462||8 Mar 2008||23 Oct 2012||International Business Machines Corporation||Alerting a participant when a topic of interest is being discussed and/or a speaker of interest is speaking during a conference call|
|US8345082 *||8 Oct 2009||1 Jan 2013||Cisco Technology, Inc.||System and associated methodology for multi-layered site video conferencing|
|US8370430 *||9 Apr 2002||5 Feb 2013||Siemens Enterprise Communications Gmbh & Co., Kg||Method for interchanging messages and information during a telephone conference|
|US8422406||18 Jul 2008||16 Apr 2013||Vodafone Group Plc||Identifying callers in telecommunications networks|
|US8463600||27 Feb 2012||11 Jun 2013||Palo Alto Research Center Incorporated||System and method for adjusting floor controls based on conversational characteristics of participants|
|US8472900 *||20 Sep 2006||25 Jun 2013||Nokia Corporation||Method and system for enhancing the discontinuous transmission functionality|
|US8498389 *||21 May 2009||30 Jul 2013||Verizon Services Corp.||Facilitation of a conference call|
|US8660251||12 Jul 2012||25 Feb 2014||International Business Machines Corporation||Alerting a participant when a topic of interest is being discussed and/or a speaker of interest is speaking during a conference call|
|US8676572||14 Mar 2013||18 Mar 2014||Palo Alto Research Center Incorporated||Computer-implemented system and method for enhancing audio to individuals participating in a conversation|
|US8760487||12 Oct 2010||24 Jun 2014||Cisco Technology, Inc.||Apparatus and method to bridge telephone and data networks|
|US8780765 *||13 Jun 2011||15 Jul 2014||Tencent Technology (Shenzhen) Company Limited||Method, system and peer apparatus for implementing multi-channel voice mixing|
|US8786668 *||28 Feb 2012||22 Jul 2014||Lifesize Communications, Inc.||Sharing participant information in a videoconference|
|US8811638||1 Dec 2011||19 Aug 2014||Elwha Llc||Audible assistance|
|US8843550||18 Dec 2006||23 Sep 2014||Polycom Israel Ltd.||System and method for controlling one or more multipoint control units as one multipoint control unit|
|US8886719||1 May 2007||11 Nov 2014||Skype||Group communication system and method|
|US8934652||13 Dec 2011||13 Jan 2015||Elwha Llc||Visual presentation of speaker-related information|
|US8976218 *||27 Jun 2011||10 Mar 2015||Google Technology Holdings LLC||Apparatus for providing feedback on nonverbal cues of video conference participants|
|US8994779 *||20 Jul 2012||31 Mar 2015||Net Power And Light, Inc.||Information mixer and system control for attention management|
|US9025750 *||16 Nov 2007||5 May 2015||Avaya Inc.||Method and apparatus for determining and utilizing local phone topography|
|US9042536||10 Aug 2012||26 May 2015||International Business Machines Corporation||Progressive, targeted, and variable conference feedback tone|
|US9049033 *||28 Mar 2012||2 Jun 2015||Net Power And Light, Inc.||Information mixer and system control for attention management|
|US9053096||29 Dec 2011||9 Jun 2015||Elwha Llc||Language translation based on speaker-related information|
|US9064152||28 Feb 2012||23 Jun 2015||Elwha Llc||Vehicular threat detection based on image analysis|
|US9077848||15 Jul 2011||7 Jul 2015||Google Technology Holdings LLC||Side channel for employing descriptive audio commentary about a video conference|
|US9081485||2 Sep 2011||14 Jul 2015||Broadnet Teleservices. LLC||Conference screening|
|US9107012||31 janv. 2012||11 août 2015||Elwha Llc||Vehicular threat detection based on audio signals|
|US20020151321 *||Apr 12, 2001||Oct 17, 2002||Diane Winchell||Systems and methods for delivering information within a group communications system|
|US20040068736 *||Jun 22, 2001||Apr 8, 2004||Lafon Michel Beaudouin||Communication terminal and system|
|US20040085914 *||Oct 30, 2003||May 6, 2004||Baxley Warren E.||Large-scale, fault-tolerant audio conferencing in a purely packet-switched network|
|US20040137882 *||May 2, 2002||Jul 15, 2004||Forsyth John Matthew||Group communication method for a wireless communication device|
|US20040148340 *||Jan 29, 2003||Jul 29, 2004||Web.De Ag||Web site having a zone layout|
|US20040172252 *||Apr 16, 2003||Sep 2, 2004||Palo Alto Research Center Incorporated||Methods, apparatus, and products for identifying a conversation|
|US20040215728 *||May 21, 2004||Oct 28, 2004||Ellen Isaacs||System, method and apparatus for communicating via sound messages and personal sound identifiers|
|US20040236593 *||Nov 20, 2003||Nov 25, 2004||Insors Integrated Communications||Data stream communication|
|US20040246332 *||Jun 5, 2003||Dec 9, 2004||Siemens Information And Communication Networks, Inc||System and method for authorizing a party to join a conference|
|US20040249967 *||Dec 8, 2003||Dec 9, 2004||Insors Integrated Communications||Primary data stream communication|
|US20040258222 *||Jun 19, 2003||Dec 23, 2004||International Business Machines Corporation||Monitoring telephone conferences through a network of computer controlled display terminals, each associated with a telephone station and displaying a user-interactive monitoring page|
|US20050018828 *||Jul 25, 2003||Jan 27, 2005||Siemens Information And Communication Networks, Inc.||System and method for indicating a speaker during a conference|
|US20050058275 *||Sep 12, 2003||Mar 17, 2005||Jun Shi||Audio source identification|
|US20050083941 *||Oct 15, 2003||Apr 21, 2005||Florkey Cynthia K.||Sending identification information of a plurality of communication devices that are active on a communication session to information receiving component|
|US20050094580 *||Nov 4, 2003||May 5, 2005||Stmicroelectronics Asia Pacific Pte., Ltd.||System and method for an endpoint participating in and managing multipoint audio conferencing in a packet network|
|US20050101308 *||Aug 16, 2004||May 12, 2005||Samsung Electronics Co., Ltd.||Mobile station and a method for controlling the mobile station in conferencing mode for use in mobile communication system|
|US20050135583 *||Dec 18, 2003||Jun 23, 2005||Kardos Christopher P.||Speaker identification during telephone conferencing|
|US20050147086 *||Feb 17, 2005||Jul 7, 2005||Rosenberg Jonathan D.||Signaling method for Internet telephony|
|US20050165894 *||Feb 17, 2005||Jul 28, 2005||Rosenberg Jonathan D.||Signaling method for Internet telephony|
|US20050165934 *||Feb 17, 2005||Jul 28, 2005||Rosenberg Jonathan D.||Signaling method for Internet telephony|
|US20050180342 *||Apr 4, 2005||Aug 18, 2005||Emeeting.Net, Inc.||Internet-enabled conferencing system and method accommodating PSTN and IP traffic|
|US20050207554 *||May 13, 2005||Sep 22, 2005||Verizon Services Corp.||Facilitation of a conference call|
|US20050271194 *||Jun 7, 2004||Dec 8, 2005||Woods Paul R||Conference phone and network client|
|US20050286496 *||Sep 29, 2004||Dec 29, 2005||Stmicroelectronics Asia Pacific Pte. Ltd.||Apparatus and method for providing communication services using multiple signaling protocols|
|US20060020967 *||Jul 26, 2004||Jan 26, 2006||International Business Machines Corporation||Dynamic selection and interposition of multimedia files in real-time communications|
|US20100171807 *||Oct 8, 2009||Jul 8, 2010||Tandberg Telecom As||System and associated methodology for multi-layered site video conferencing|
|US20100233993 *||Sep 16, 2010||Qualcomm Incorporated||System for collecting billable information in a group communication network|
|US20110069140 *||Mar 24, 2011||Verizon Services Corp.||Facilitation of a conference call|
|US20110246191 *||Oct 6, 2011||Tencent Technology (Shenzhen) Company Limited||Method, system and peer apparatus for implementing multi-channel voice mixing|
|US20120176467 *||Feb 28, 2012||Jul 12, 2012||Kenoyer Michael L||Sharing Participant Information in a Videoconference|
|US20120249719 *||Mar 28, 2012||Oct 4, 2012||Net Power And Light, Inc.||Information mixer and system control for attention management|
|US20120327180 *||Jun 27, 2011||Dec 27, 2012||Motorola Mobility, Inc.||Apparatus for providing feedback on nonverbal cues of video conference participants|
|US20130021431 *||Jan 24, 2013||Net Power And Light, Inc.||Information mixer and system control for attention management|
|US20130144619 *||Jan 23, 2012||Jun 6, 2013||Richard T. Lord||Enhanced voice conferencing|
|US20140192138 *||Mar 10, 2014||Jul 10, 2014||Logitech Europe S.A.||Displaying Participant Information in a Videoconference|
|US20140253669 *||Mar 6, 2014||Sep 11, 2014||Samsung Electronics Co., Ltd.||Conference call terminal and method for operating user interface thereof|
|US20150025888 *||Oct 22, 2013||Jan 22, 2015||Nuance Communications, Inc.||Speaker recognition and voice tagging for improved service|
|CN100512558C||Sep 29, 2004||Jul 8, 2009||Samsung Electronics Co., Ltd.||Mobile station and a method for controlling the mobile station in conferencing mode of mobile communication system|
|CN101867768A *||May 31, 2010||Oct 20, 2010||Hangzhou H3C Technologies Co., Ltd.||Picture control method and device for video conference place|
|CN101867768B||May 31, 2010||Feb 8, 2012||Hangzhou H3C Technologies Co., Ltd.||Picture control method and device for video conference place|
|EP1383272A1 *||Jul 19, 2002||Jan 21, 2004||Web. De AG||Communications environment comprising a telecommunications web site|
|EP1530352A1 *||Sep 17, 2004||May 11, 2005||Samsung Electronics Co., Ltd.||A mobile station and a method for controlling the mobile station in conferencing mode for use in mobile communication system|
|WO2004014054A1 *||Jul 25, 2003||Feb 12, 2004||Collabo Technology Inc||Method and apparatus for identifying a speaker in a conferencing system|
|WO2004025941A2 *||Sep 10, 2003||Mar 25, 2004||Brian Elan Lee||Conference call management with speaker designation|
|WO2009030128A1 *||Jun 26, 2008||Mar 12, 2009||Jiangping Feng||A method and media server of obtaining the present active speaker in conference|
|U.S. Classification||379/202.01, 348/14.08, 379/204.01|
|International Classification||H04M7/12, H04M7/00, H04M3/56|
|Cooperative Classification||H04M3/567, H04M7/12, H04M3/569, H04M7/006, H04M2201/40, H04M2201/42|
|European Classification||H04M3/56P2, H04M3/56M|
|Dec 21, 1998||AS||Assignment|
Owner name: LUCENT TECHNOLOGIES INC., NEW JERSEY
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CHANG, YOUNG-FU;REEL/FRAME:009673/0699
Effective date: 19981218
|Mar 23, 2005||FPAY||Fee payment|
Year of fee payment: 4
|Apr 9, 2009||FPAY||Fee payment|
Year of fee payment: 8
|Mar 14, 2013||FPAY||Fee payment|
Year of fee payment: 12