US9270941B1 - Smart video conferencing system - Google Patents

Smart video conferencing system

Info

Publication number
US9270941B1
Authority
US
United States
Prior art keywords
video
video data
data
physical environment
captured
Prior art date
Legal status
Active
Application number
US14/659,373
Inventor
Mark Lavelle
Current Assignee
Logitech Europe SA
Original Assignee
Logitech Europe SA
Priority date
Filing date
Publication date
Application filed by Logitech Europe SA
Priority to US14/659,373
Assigned to LOGITECH EUROPE S.A. (Assignor: LAVELLE, MARK)
Application granted
Publication of US9270941B1
Status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/14 Systems for two-way working
    • H04N7/141 Systems for two-way working between two video terminals, e.g. videophone
    • H04N7/142 Constructional details of the terminal equipment, e.g. arrangements of the camera and the display
    • H04N7/147 Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals
    • H04N7/15 Conference systems
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects

Definitions

  • the present invention relates to video cameras, and in particular to a smart video conferencing system that controls a video camera based on measures of activity within video data captured using another video camera.
  • Video conferencing has become more popular in recent years, thanks in large part to proliferation of high speed Internet and price reductions in camera equipment.
  • dedicated video conferencing locations exist where rooms and technological resources are dedicated solely to the task of video conferencing.
  • server-side logic may be provided that is capable of dynamically switching between the video feeds of the various cameras when determining which video data to display at a remote video conferencing site.
  • many modern instant messaging software applications support voice and video chatting, where the participants can view each other while talking.
  • video cameras are devices that can be configured to capture frames in a sequential manner using an image sensor. Additionally, a number of optimization operations can be performed on the captured frames in order to improve the quality of the video data. For instance, pixel correction operations can be performed on each captured frame, where bad pixel information is used to correct for hot or dead pixels. Additionally, auto focus operations can be performed, where a frame(s) is analyzed to determine whether the lens needs to be adjusted to achieve a more optimal focus. Upon determining that a lens adjustment is necessary, a feedback signal could be sent to motors or actuators to adjust the focal position of the lens. Additionally, color processing operations can be performed, where the frames are analyzed to determine if any color corrections are necessary. Such color corrections could include, for example, gamma correction, white balance correction and exposure correction.
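  • As an illustration of the color-processing operations described above, the following is a minimal Python/numpy sketch; the function names, the neighbor-averaging bad-pixel repair, and the gray-world white balance are illustrative assumptions rather than the specific algorithms used by any embodiment.

```python
import numpy as np

def correct_bad_pixels(frame: np.ndarray, bad_pixel_mask: np.ndarray) -> np.ndarray:
    """Replace known hot/dead pixels with the mean of their horizontal neighbors."""
    out = frame.copy()
    ys, xs = np.nonzero(bad_pixel_mask)
    for y, x in zip(ys, xs):
        left = frame[y, max(x - 1, 0)]
        right = frame[y, min(x + 1, frame.shape[1] - 1)]
        out[y, x] = ((left.astype(np.uint16) + right.astype(np.uint16)) // 2).astype(np.uint8)
    return out

def gray_world_white_balance(frame: np.ndarray) -> np.ndarray:
    """Scale each channel so its mean matches the overall mean (gray-world assumption)."""
    means = frame.reshape(-1, 3).mean(axis=0)
    scale = means.mean() / np.maximum(means, 1e-6)
    return np.clip(frame * scale, 0, 255).astype(np.uint8)

def gamma_correct(frame: np.ndarray, gamma: float = 2.2) -> np.ndarray:
    """Apply a gamma curve via a 256-entry lookup table."""
    lut = (np.linspace(0, 1, 256) ** (1.0 / gamma) * 255).astype(np.uint8)
    return lut[frame]
```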
  • the optimized video data can be encoded into a suitable format.
  • the encoding format used can depend on the available network bandwidth and the application in question. For example, a dedicated video conference environment with a high bandwidth network connection could encode captured video data at a relatively high bit rate, while a video conference application on a mobile phone or tablet with a more limited network connection could encode the captured video data at a lower bit rate.
  • the encoded video data can then be transmitted to the remote site for display via a communications network (e.g., the Internet).
  • One embodiment presented in this disclosure provides a method of facilitating transmission of a video stream from a first video conferencing device to a remote video conferencing device.
  • the method includes receiving, by the first video conferencing endpoint device, first video data captured from a first field of view of a physical environment.
  • the video data includes a plurality of frames.
  • the method also includes determining activity data for portions of the first video data across the plurality of frames.
  • the method includes generating, by a first video conferencing endpoint device, second video data from a second field of view of the physical environment, based on the determined activity data.
  • the method further includes facilitating transmission of the video stream to the remote video conferencing device for display, where the video stream includes the generated second video data and audio data captured within the physical environment.
  • Embodiments of the disclosure may further provide a method of generating a video stream for use in a video conference, comprising receiving first video data captured from a first field of view of a physical environment, the first video data comprising a plurality of frames, and determining activity data from portions of the first video data using information provided in the plurality of frames. The method further comprises generating second video data from a second field of view of the physical environment, based on the determined activity data, and generating a video stream that comprises the generated second video data and audio data captured within the physical environment.
  • Embodiments of the disclosure may further provide a system for facilitating transmission of a video stream from a first video conferencing device to a remote video conferencing device. The system includes a first camera sensor configured to capture first video data comprising a plurality of frames from a first field of view of a physical environment. Additionally, the system includes a second camera sensor and a mounting structure capable of adjusting an orientation of the second camera sensor along one or more degrees of freedom. The system further includes control logic configured to determine activity data for portions of the video data across the plurality of frames and control movement of the mounting structure to adjust the orientation of the second camera along the one or more degrees of freedom, based on the determined activity data.
  • the system also includes video processing logic configured to capture video data from a second field of view of the physical environment using the second camera sensor, encode the captured video data, and facilitate transmission of the video stream to the remote video conferencing device for display, the video stream comprising the generated second video data and audio data captured within the physical environment.
  • Embodiments of the disclosure may further provide a system for generating a video stream for use in a video conference.
  • the system includes a first camera sensor configured to capture first video data comprising a plurality of frames from a first field of view of a physical environment. Additionally, the system includes a second camera sensor and a mounting structure capable of adjusting an orientation of the second camera sensor along one or more degrees of freedom.
  • the system further includes control logic configured to determine activity data for portions of the video data across the plurality of frames and control movement of the mounting structure to adjust the orientation of the second camera along the one or more degrees of freedom, based on the determined activity data.
  • the system also includes video processing logic configured to capture video data from a second field of view of the physical environment using the second camera sensor, encode the captured video data, and generate a video stream comprising the generated second video data and audio data captured within the physical environment.
  • Embodiments of the disclosure may further provide a system for facilitating transmission of a video stream from a first video conferencing device to a remote video conferencing device.
  • the system includes a camera sensor configured to capture first video data comprising a plurality of frames from a first field of view of a physical environment at a first resolution.
  • the system also includes control logic configured to determine activity data for portions of the first video data across the plurality of frames, determine a portion of the captured first video data to extract, based on the determined activity data and extract the portion of the captured video data to create second video data.
  • the second video data has a second resolution that is less than the first resolution of the captured video data.
  • the system further includes video processing logic configured to facilitate transmission of the video stream to the remote video conferencing device for display.
  • the video stream includes the generated second video data and audio data captured within the physical environment.
  • Embodiments of the disclosure may further provide a system for generating a video stream for use in a video conference.
  • the system includes a camera sensor configured to capture first video data comprising a plurality of frames from a first field of view of a physical environment at a first resolution.
  • the system also includes control logic configured to determine activity data for portions of the first video data across the plurality of frames, determine a portion of the captured first video data to extract, based on the determined activity data and extract the portion of the captured video data to create second video data.
  • the second video data has a second resolution that is less than the first resolution of the captured video data.
  • the system further includes video processing logic configured to generate a video stream that includes the generated second video data and audio data captured within the physical environment.
  • FIG. 1 is a diagram illustrating a video conferencing system for two video conferencing endpoints, according to one embodiment described herein.
  • FIG. 2 is a block diagram illustrating a video conferencing system for use as an endpoint to a video conference, according to one embodiment described herein.
  • FIG. 3 illustrates extracting video stream data from within a higher resolution video stream, according to one embodiment described herein.
  • FIG. 4 is a flow diagram illustrating a method for controlling a camera to capture video data based on activity detected within video data captured by another camera, according to one embodiment described herein.
  • FIG. 5 is a flow diagram illustrating a method for controlling a camera based on supplemental video streams and detected audio data, according to one embodiment described herein.
  • FIG. 6 is a flow diagram illustrating a method for creating a video stream by extracting a portion of a higher resolution video stream, according to one embodiment described herein.
  • FIG. 7 is a diagram illustrating a video conferencing environment that includes a video conferencing apparatus, according to one embodiment described herein.
  • FIG. 8 is a diagram illustrating a video conferencing environment that includes multiple video conferencing devices operating in a master-slave relationship, according to one embodiment described herein.
  • inexpensive video conferencing systems exist but are predominantly limited to transmitting only a single video stream across the network. While this lowers the network requirements for using such video conferencing systems, it also limits what can be displayed at the remote video conferencing site to the single video stream. Thus, for video captured in a conference room environment, the remote site may simply see an overall view of the conference room environment, and any adjustments to the camera (e.g., pan, tilt and zoom operations) must be made manually by a user.
  • embodiments provide a video conferencing endpoint system capable of manipulating a field of view of captured video content that is transmitted to a remote video conferencing endpoint device, based on measures of activity between frames of captured video content. For instance, embodiments may receive captured video data having a first field of view of a physical environment. Generally, the captured video data includes a sequential plurality of frames captured using a camera sensor. In one embodiment, the video data is captured using a wide angle camera sensor.
  • Embodiments may then analyze the captured video data to determine a plurality of measures of activity for portions of the video data across the plurality of frames.
  • the measures of activity correspond to types of activities that are desirable to include in the video stream transmitted to the remote video conferencing site.
  • embodiments could analyze the captured video data to determine which of a plurality of users depicted within the video data is currently speaking. In doing so, embodiments could first perform a facial recognition analysis on the frames of the video data and could then determine a measure of motion for each detected face within the frames of the video data. Embodiments could then determine which measure of motion is most indicative of a user speaking.
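  • A minimal sketch of such an analysis, assuming OpenCV's stock Haar face detector and using mouth-region frame differencing as a stand-in for the patent's unspecified measure of motion:

```python
import cv2
import numpy as np

# Haar cascade shipped with OpenCV; a hypothetical stand-in for whatever
# facial recognition analysis an embodiment actually uses.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def speaking_scores(prev_gray: np.ndarray, curr_gray: np.ndarray):
    """Return (x, y, w, h, score) per detected face, where score is the mean
    absolute frame difference over the lower third of the face (mouth region)."""
    scores = []
    for (x, y, w, h) in face_detector.detectMultiScale(curr_gray, 1.1, 5):
        my = y + (2 * h) // 3                       # top of mouth region
        diff = cv2.absdiff(curr_gray[my:y + h, x:x + w],
                           prev_gray[my:y + h, x:x + w])
        scores.append((x, y, w, h, float(diff.mean())))
    return scores

def likely_speaker(scores):
    """Pick the face whose mouth-region motion is largest."""
    return max(scores, key=lambda s: s[4]) if scores else None
```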
  • Embodiments could then use the determined measures of activity to generate stream video data having a second field of view of the physical environment. For instance, embodiments could control the movement of a second camera device to capture stream video data that is substantially centered on a physical entity within the physical environment, based on the determined measures of activity.
  • logic for the video conferencing system could determine which user depicted within the captured video data is currently speaking and could control the movement of the second camera device (e.g., pan, tilt and zoom of the second camera device) to capture stream video data that is substantially centered on the user who is currently speaking.
  • the stream video data could then be transmitted over a network to a remote video conferencing device for display. Doing so provides an intelligent video conferencing system that is capable of dynamically adjusting the video stream that is transmitted to the remote video conferencing device.
  • FIG. 1 is a diagram illustrating a video conferencing system for two video conferencing endpoints, according to one embodiment described herein.
  • the system 100 includes a video conferencing endpoint 110 and a video conferencing endpoint 140 , interconnected via a network 135 .
  • the network 135 generally represents any data communications network suitable for the transmission of video and audio data (e.g., the Internet).
  • each of the video conferencing endpoints 110, 140 includes one or more display devices for at least displaying received video and audio data, as well as video and audio capture devices (e.g., pan and tilt camera 120) for capturing video data to send to the other video conferencing endpoint.
  • the video conferencing endpoint 140 could represent a video conferencing software application (e.g., Microsoft® Skype®) executing on a mobile computing device that transmits captured video and audio data across the network 135 for display at the video conferencing endpoint 110 and that displays video and audio data received from the video conferencing endpoint 110 .
  • the video conferencing endpoint 140 could represent a dedicated video conferencing environment in which multiple camera devices are permanently installed. More generally, the video conferencing endpoint 140 represents any device(s) suitable to participate in a video conference, or even just to display the received video and audio data.
  • video stream data may include one or more captured audio streams as well (e.g., captured using one or more microphone devices within the physical environment).
  • captured audio data can be transmitted as well using any suitable technique for representing and transmitting audio data.
  • the video conferencing endpoint 110 includes a wide angle camera device 115 , a pan and tilt camera device 120 , a control device 130 , one or more microphones 131 and a user system 125 .
  • the wide angle camera device 115 is configured to capture a video stream of the physical environment, and preferably is positioned so that all users within the physical environment are depicted within the captured video stream.
  • the pan and tilt camera device 120 is capable of adjusting the orientation of a camera sensor within the device 120, across multiple degrees of freedom. Additionally, the pan and tilt camera device 120 may be capable of zoom functionality, e.g., hardware zoom or software zoom.
  • while the system 100 shown in FIG. 1 depicts separate components, the system 100 may include a single enclosure, such as a video conferencing apparatus 720 (FIG. 7) discussed below, that includes the control device 130 and one or more camera devices (e.g., wide angle camera device 115 and one or more pan and tilt camera devices 120).
  • the system 100 may include a single enclosure that includes the control device 130, one or more microphones 131 and the wide angle camera device 115.
  • the system 100 may include a single enclosure that includes the control device 130, one or more microphones 131, the wide angle camera device 115 and the pan and tilt camera device 120. In some cases, it is desirable to position the camera devices so that they face the same direction (e.g., front facing) and are oriented so that their fields-of-view completely overlap.
  • the user device 125 represents any computing device capable of transmitting a video stream to a remote video conferencing device (e.g., video conferencing endpoint 140) over the network 135.
  • examples of the user device 125 include personal computing devices, tablet computing devices, mobile devices and so on.
  • the user device 125 can execute one or more video conferencing software applications capable of receiving video data from the control device 130 (e.g., captured using the pan and tilt camera device 120) and transmitting the captured video data to the video conferencing endpoint 140, via the network 135.
  • video conferencing applications include, without limitation, Microsoft® Skype® and Apple® FaceTime®.
  • any video conferencing application capable of receiving video data and transmitting the video data to a remote site can be used, consistent with the functionality described herein.
  • the user device 125 may display video data captured at and received from the remote video conferencing endpoint 140 , e.g., on a native display device of the user device 125 or on a separate display device (not shown) within the physical environment.
  • the control device 130 is generally configured to adjust an orientation of the pan and tilt camera device 120, based on detected measures of activity within a video stream captured by the wide angle camera device 115.
  • the control device 130 could analyze the video stream captured by the wide angle camera device 115 to determine which user depicted within the video data is currently speaking. The control device 130 could then adjust the orientation of the pan and tilt camera device 120 , so that the video stream captured by the pan and tilt camera device 120 is substantially centered on the determined user that is currently speaking within the physical environment.
  • the control device 130 can also adjust the zoom level of the pan and tilt camera device 120 , so that the user who is currently speaking is depicted at a predetermined size within the video stream captured by the pan and tilt camera device 120 . For example, the control device 130 could adjust the zoom level such that the user occupies 70% of the captured video frames.
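  • For example, a zoom target could be computed directly from a detected bounding box; this sketch assumes the subject's height in pixels is available from the activity analysis, and the clamping range is an arbitrary assumption:

```python
def zoom_for_target_fraction(face_h: int, frame_h: int,
                             target: float = 0.70,
                             min_zoom: float = 1.0,
                             max_zoom: float = 10.0) -> float:
    """Zoom multiplier so the subject's height fills `target` of the frame.
    E.g. a face spanning 20% of the frame height needs 3.5x zoom to reach 70%."""
    current = face_h / frame_h
    zoom = target / max(current, 1e-6)
    return max(min_zoom, min(zoom, max_zoom))
```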
  • the control device 130 could then transmit the video data captured using the pan and tilt camera 120 and the microphone 131 to the user device 125 .
  • upon receiving the video data, the user device 125 could transmit the video data to the video conferencing endpoint 140 for display, using the network 135.
  • the control device 130 provides intelligent camera control for video conferencing applications on the user device 125 , without requiring the transmission of multiple video streams across the network 135 .
  • FIG. 2 is a block diagram illustrating a video conferencing system for use as an endpoint to a video conference, according to one embodiment described herein.
  • the system 200 could be used as the video conferencing endpoint 110 and/or 140 and may contain one or more of the related components shown in FIG. 1.
  • the video conferencing system 200 includes a wide angle camera sensor 210 , pan and tilt camera sensor 220 , camera controller component 230 , video streaming component 240 , and microphone sensors 260 .
  • the wide angle camera sensor 210 represents a camera having a wide angle lens that is configured to capture video data of the physical environment.
  • such a wide angle camera sensor 210 is positioned so that all or substantially all users within the physical environment, and in front of the camera, are depicted within the captured video.
  • the wide angle camera sensor 210 may have a field of view that has a viewing angle that is between about 70 degrees and about 180 degrees, such as a viewing angle of about 130 degrees.
  • the wide angle camera sensor 210 is capable of providing the captured video data to the camera controller component 230 .
  • the camera controller component 230 could use such data to control movement of the pan and tilt camera sensor 220 and to create the video stream to transmit to a remote video conferencing endpoint.
  • the pan and tilt camera sensor 220 represents a camera sensor affixed to a mounting structure that is capable of moving in one or more degrees of freedom.
  • the mounting structure may be capable of moving such that an orientation of the pan and tilt camera sensor 220 changes in the horizontal and/or vertical directions.
  • the pan and tilt camera sensor 220 can be capable of zoom functionality (e.g., hardware zoom, software zoom, etc.).
  • the pan and tilt camera sensor 220 is capable of providing the captured video data to the camera controller component 230 .
  • the camera controller component 230 could use the data from the pan and tilt camera sensor 220 in creating the video stream to transmit to the remote video conferencing endpoint.
  • the pan and tilt camera sensor 220 may have a field of view that has a viewing angle that is between about 5 degrees and about 80 degrees, such as a viewing angle of about 50 degrees.
  • the camera controller component 230 is generally configured to adjust the orientation of the pan and tilt camera sensor 220 , based on detected measures of activity within video data captured by the wide angle camera sensor 210 . For example, the camera controller component 230 could analyze the received video data captured using the wide angle camera sensor 210 to detect which user within the physical environment is currently speaking. The camera controller component 230 could then adjust the orientation of the pan and tilt camera sensor 220 , such that the user currently speaking is substantially centered within video data captured using the pan and tilt camera sensor 220 .
  • the camera controller component 230 could analyze the video data captured using the wide angle camera sensor 210 and could determine that a particular user has moved to a predefined location within the physical environment (e.g., within a predefined distance of a whiteboard within the physical environment). The camera controller component 230 could then adjust the orientation of the pan and tilt camera sensor 220 , such that the particular user is substantially centered or desirably positioned within the video data captured using the pan and tilt camera sensor 220 .
  • the camera controller component 230 is a general use computing device that includes a processor 218 , a memory unit 214 , input/output (I/O) devices 216 and a communications device 250 .
  • the memory unit 214 is generally included to be representative of a random access memory.
  • the memory unit 214 represents any memory sufficiently large to hold the necessary programs and data structures.
  • Memory unit 214 could be one or a combination of memory devices, including Random Access Memory, nonvolatile or backup memory (e.g., programmable or Flash memories, read-only memories, etc.).
  • memory unit 214 may be considered to include memory physically located elsewhere, for example, on another computer communicatively coupled to the camera controller device.
  • the memory unit 214 includes an operating system 219 , one or more software applications 215 , stored media data 217 and a model structure 213 .
  • the model structure 213 describes attributes of the physical environment in which the video conferencing system 200 is located (e.g., a make-up and arrangement of physical entities within the physical environment).
  • Examples of operating system 219 include UNIX, a version of the Microsoft Windows® operating system, and distributions of the Linux® operating system. Additional examples of operating system 219 include custom operating systems for gaming consoles, including the custom operating systems for systems such as the Nintendo DS® and Sony PSP®.
  • Processor 218 may be a hardware unit or combination of hardware units capable of executing software applications and processing data.
  • the processor 218 includes a central processing unit (CPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), and/or a combination of such units.
  • the processor 218 retrieves and executes programming instructions stored in the memory unit 214 .
  • Processor 218 is included to be representative of a single CPU, multiple CPUs, a single CPU having multiple processing cores, GPUs having multiple execution paths, and the like.
  • the processor 218 could execute the one or more software applications 215 and process the stored media data 217 , which are each included within memory unit 214 .
  • the video streaming component 240 is generally configured to facilitate the transfer of video data (e.g., video data captured using the pan and tilt camera sensor 220 ) to a remote video conferencing device for display (e.g., using a communications device 250 such as a network interface controller).
  • the video streaming component 240 can encode the captured video data in an encoding format and at a bit rate suitable for the video conferencing session and deliver the video data to the user device 125 via a wireless (e.g., WiFi, Bluetooth®, etc.) or wired connection (e.g., USB connection).
  • the user device 125 could then transmit the video data to a remote video conferencing device using conventional communication devices and protocols (e.g., network interface card, Ethernet card, modem, wireless network hardware or other conventional computing device communication hardware).
  • the video streaming component 240 provides the video data to a video streaming application on the user device 125, such as, without limitation, Microsoft® Skype® and Apple® FaceTime®, using the communication device 250.
  • the encoding format used by the video streaming component 240 could be determined based on the capabilities or specified preferences of the user device 125 .
  • a suitable video encoding format is H.264 (and variants such as H.264/AVC, H.264 High Profile and H.264 SVC). More generally, however, any suitable encoding format can be used, consistent with the present disclosure.
  • a particular video streaming application on the user device 125 could be configured to transmit streaming video data in a particular encoding format, and thus the video streaming component 240 could be configured to encode the generated video data using the particular encoding format for delivery to the streaming application on the user device.
  • the remote video conferencing device may be capable of receiving the encoded video data from the user device and outputting the received data on a display.
  • the encoding format can be specified by a user of the video conferencing system 200, by the user device 125, or by a video streaming application executing on the user device.
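  • As one hedged example of such an encoding step, raw frames could be piped to an external encoder such as ffmpeg with libx264 (assuming ffmpeg is installed); the bitrate and preset shown are placeholders to be chosen per the session's available bandwidth, as discussed above:

```python
import subprocess

def start_h264_encoder(width: int, height: int, fps: int, bitrate: str = "2M"):
    """Spawn an ffmpeg process that reads raw BGR frames on stdin and writes
    an H.264 elementary stream to stdout."""
    cmd = [
        "ffmpeg", "-loglevel", "error",
        "-f", "rawvideo", "-pix_fmt", "bgr24",
        "-s", f"{width}x{height}", "-r", str(fps), "-i", "-",
        "-c:v", "libx264", "-preset", "veryfast", "-tune", "zerolatency",
        "-b:v", bitrate, "-f", "h264", "-",
    ]
    return subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)

# proc = start_h264_encoder(1920, 1080, 30)
# proc.stdin.write(frame.tobytes())   # frame: HxWx3 uint8 BGR numpy array
```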
  • the camera controller component 230 can adjust the orientation of the pan and tilt camera sensor 220 in order to intelligently capture video data within the physical environment and the camera controller component 230 can then transmit the captured video data to the video conferencing application on the user device (e.g., over a wireless network connection, over a wired connection, etc.).
  • the video conferencing application could then perform any needed encoding operations on the received video data (e.g., transcoding the received video data from a first format to another format used by the video conferencing application) and could then transmit the video data to a remote video conferencing device for display.
  • the camera controller component 230 provides intelligent video conferencing services for existing video conferencing applications of the user device, thereby improving the performance of the video conferencing applications on the user device without any modifications to the video conferencing applications.
  • the camera controller component 230 is configured to use the microphone sensors 260 in order to determine an orientation for the pan and tilt camera sensor 220 .
  • the microphone sensors 260 could represent a microphone array (e.g., two or more microphone sensor devices operating in tandem) and the camera controller component 230 could include logic to determine a direction from which particular audio content was received by the microphone sensors 260 within the microphone array.
  • the camera controller component 230 could analyze audio data collected by the microphone sensors 260 within the microphone array and could determine that a portion of the audio data matches a predefined signature corresponding to user speech data. The camera controller component 230 could then determine a direction from which the user speech data originated, relative to a physical position of the microphone sensors 260 .
  • the camera controller component 230 could then use the determined direction to orient the pan and tilt camera sensor 220 , such that the pan and tilt camera sensor 220 is oriented in substantially the determined direction from which the user speech data originated. By doing so, the camera controller component 230 could capture video data of the user who is currently speaking within the physical environment. As discussed above, in doing so, the camera controller component 230 can also alter a zoom level of the pan and tilt camera sensor 220 , so that the user that is speaking is predominantly featured within the captured video data. For example, the camera controller component 230 could alter the zoom of the pan and tilt camera sensor 220 until the speaking user occupies a predefined portion of the captured frames of video data.
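  • One common way to estimate such a direction from a two-microphone array is GCC-PHAT time-delay estimation; the patent does not specify a method, so the following numpy sketch is an assumption:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at roughly room temperature

def gcc_phat_delay(sig_a: np.ndarray, sig_b: np.ndarray, fs: int) -> float:
    """Estimate the time delay (seconds) of sig_b relative to sig_a using
    the GCC-PHAT cross-correlation."""
    n = 2 * max(len(sig_a), len(sig_b))
    A = np.fft.rfft(sig_a, n=n)
    B = np.fft.rfft(sig_b, n=n)
    R = A * np.conj(B)
    R /= np.maximum(np.abs(R), 1e-12)          # PHAT weighting
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return float(shift) / fs

def arrival_angle(delay_s: float, mic_spacing_m: float) -> float:
    """Angle of arrival (degrees from broadside) for a two-mic array with
    the given spacing between the microphone sensors."""
    x = np.clip(delay_s * SPEED_OF_SOUND / mic_spacing_m, -1.0, 1.0)
    return float(np.degrees(np.arcsin(x)))
```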
  • the camera controller component 230 is configured to use a single high-resolution camera to capture a video stream that is at least partially transmitted to the remote video conferencing device.
  • FIG. 3 illustrates extracting video data from within a higher resolution video stream, according to one embodiment described herein.
  • the screenshot illustrates a plurality of users within a video conferencing environment.
  • the user 305 is currently speaking.
  • the camera controller component 230 could perform an analysis of the captured video data to determine a physical entity (e.g., a particular user) to be the focus of the stream video data transmitted to the remote video conferencing device. For instance, the camera controller component 230 could determine that the frames should focus on the user 305 who is currently speaking within the physical environment.
  • the camera controller component 230 could determine a portion of the captured video data to extract as stream video data to be transmitted to the remote video conferencing device. For example, in the illustration 300 , the camera controller component 230 has determined the region 310 around the user 305 , and the controller component 230 could extract the region 310 from each frame of the captured video data to create a stream of video data. In doing so, the camera controller component 230 can dynamically adjust the region 310 for each frame of the captured video, so that the user 305 is substantially centered within each frame of the stream video data. Doing so allows a single camera to be used for both the detection of activity within the physical environment and the capture of video data to be streamed to the remote video conferencing device.
  • the stream video data is a lower resolution video stream than the captured video data.
  • the captured video data could have a resolution of 3840 ⁇ 2160 pixels
  • the stream video data extracted from the captured video data could have a resolution of 1024 ⁇ 768 pixels.
  • Such an embodiment may be preferable, for instance, because many modern devices have the computing resources and network resources to support the transmission and display of video data with a resolution of 1024×768 pixels, while substantially fewer devices today may be capable of supporting (i.e., with sufficient computing and network resources) a video stream having a resolution of 3840×2160 pixels.
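  • A sketch of extracting region 310 as a "virtual camera" (the window size and clamping behavior are assumptions); note that a 3840×2160 capture carries about 8.3 Mpixels per frame versus roughly 0.8 Mpixels for 1024×768, better than a 10× reduction before encoding:

```python
import numpy as np

def extract_region(frame: np.ndarray, center_x: int, center_y: int,
                   out_w: int = 1024, out_h: int = 768) -> np.ndarray:
    """Cut an out_w x out_h window from `frame`, centered on (center_x, center_y)
    but clamped so the window never leaves the frame, acting as a virtual
    camera pointed at the selected participant. Assumes the source frame is
    at least out_w x out_h."""
    h, w = frame.shape[:2]
    x0 = min(max(center_x - out_w // 2, 0), w - out_w)
    y0 = min(max(center_y - out_h // 2, 0), h - out_h)
    return frame[y0:y0 + out_h, x0:x0 + out_w]
```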
  • the stream video data is sent to the external endpoint from the user device 125 in a 1080p resolution (or other desirable resolution) using typical computing and network resources.
  • the stream video data can be sent to the user device 125 in a 1080p resolution (or other desirable resolution), via the video conferencing system (e.g., system 200 ), so that the user device 125 can then send the stream video data to the external endpoint.
  • any suitable resolutions for the captured video data and the stream video data can be used, consistent with the present disclosure.
  • FIG. 4 is a flow diagram illustrating a method for controlling a camera to capture video data based on activity detected within video data captured by another camera, according to one embodiment described herein.
  • the method 400 begins at block 410 , where a wide-angle camera sensor captures a wide-angle video stream of a video conferencing environment.
  • the camera controller component 230 analyzes the wide-angle video stream to detect a region of activity within the frames of the video stream (block 415 ).
  • examples of such activity include (without limitation) a user currently speaking and a user moving to a predefined location within the physical environment (e.g., a whiteboard).
  • the camera controller component 230 then manipulates a second camera device (e.g., pan and tilt camera sensor 220 ) to capture a video stream corresponding to the detected region of activity (block 420 ).
  • the camera controller component 230 could manipulate the position of the second camera device such that the second camera device is substantially oriented in the direction of the particular user to collect video data of the user.
  • a video streaming component 240 can then receive the video data captured using the second camera device and can facilitate the transfer of the captured video data to a remote video conferencing endpoint for display (block 425 ), and the method 400 ends.
  • the video streaming component 240 could provide the captured video data to a streaming application on a user device 125 , which in turn transmits the video data to the remote videoconferencing endpoint (e.g., via the Internet).
  • in this way, embodiments provide an intelligent video conferencing system that is capable of providing a video stream that selectively focuses on physical elements within the video conferencing environment (e.g., a user who is currently speaking), without requiring the transmission of multiple video streams across the network.
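  • Pulling the blocks of method 400 together, a hypothetical control loop might look as follows; the `ptz` and `streamer` interfaces and the pixel-offset-to-pan mapping are assumptions, and `speaking_scores`/`likely_speaker` refer to the speaker-detection sketch earlier in this section:

```python
import cv2

def run_method_400(wide_cam, ptz_cam, ptz, streamer,
                   frame_w: int = 1920, hfov_deg: float = 130.0):
    """Sketch of method 400: wide-angle capture (410), activity detection
    (415), second-camera manipulation (420), and stream hand-off (425).
    `wide_cam`/`ptz_cam` follow cv2.VideoCapture's read() interface; the
    wide camera's 130-degree field of view matches the example above."""
    prev_gray = None
    while True:
        ok, frame = wide_cam.read()               # block 410: wide-angle capture
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None:
            speaker = likely_speaker(speaking_scores(prev_gray, gray))  # block 415
            if speaker is not None:
                x, y, w, h, _ = speaker
                # block 420: map the speaker's horizontal offset from frame
                # center to a pan angle within the wide camera's field of view.
                offset = (x + w / 2 - frame_w / 2) / frame_w
                ptz.pan_to(offset * hfov_deg)     # hypothetical PTZ interface
        ok, out_frame = ptz_cam.read()
        if ok:
            streamer.send(out_frame)              # block 425: hand off for display
        prev_gray = gray
```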
  • FIG. 5 is a flow diagram illustrating a method for controlling a camera based on supplemental video streams and detected audio data, according to one embodiment described herein.
  • the method 500 begins at block 510 , where the camera controller component 230 analyzes one or more supplemental video streams to determine a respective one or more measures of activity for regions of the supplemental video streams.
  • one of the supplemental video streams could be the video stream captured using a wide-angle camera sensor.
  • additional camera sensors may be provided, such as a camera sensor oriented substantially in the direction of a predefined area of interest (e.g., a whiteboard within the video conferencing environment).
  • the camera controller component 230 can analyze the frames of each of these video streams in order to determine an orientation for a pan and tilt camera device that captures video data to be transmitted to a remote video conferencing device.
  • the camera controller component 230 collects audio data using microphone sensors within the physical environment (block 515) and determines a direction from which at least a portion of the audio data originated (block 520). For instance, as discussed above, a microphone array could be used to capture the audio data and, upon identifying that a portion of the audio data matches a predefined profile for user speech, the camera controller component 230 could use the data collected from the microphone sensors in the microphone array to determine a direction from which the user speech originated.
  • the camera controller component 230 manipulates an orientation of a camera device (e.g., a pan and tilt camera sensor 220 ) based on the determined measures of activity, the determined direction from which at least a portion of the audio data originated, and a mapping structure describing a layout of the physical environment (block 525 ).
  • the mapping structure could specify a location of physical elements within the physical environment, such as chairs in which the users may be seated, and the camera controller component 230 could use this mapping structure to more accurately orient the camera device in a direction of interest.
  • such a mapping structure could specify predefined areas of interest within the physical environment, such as the location of a whiteboard relative to a physical position of the camera devices.
  • the camera controller component 230 could manipulate the orientation of the camera device to face (and capture video data of) the predefined area of interest.
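  • The patent does not define the mapping structure's format; one assumed representation is a simple table of bearings for seats and areas of interest, measured from the camera apparatus:

```python
# Illustrative (assumed) schema: angles measured from the camera apparatus,
# so detected activity can be matched to known seats and areas of interest.
ROOM_MAP = {
    "areas_of_interest": [
        {"name": "whiteboard", "pan_deg": -42.0, "tilt_deg": 5.0, "zoom": 2.0},
    ],
    "seats": [
        {"name": "seat-1", "pan_deg": -15.0, "tilt_deg": -3.0},
        {"name": "seat-2", "pan_deg": 0.0, "tilt_deg": -3.0},
        {"name": "seat-3", "pan_deg": 15.0, "tilt_deg": -3.0},
    ],
}

def nearest_seat(pan_deg: float) -> dict:
    """Snap a coarse audio/visual bearing to the closest known seat."""
    return min(ROOM_MAP["seats"], key=lambda s: abs(s["pan_deg"] - pan_deg))
```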
  • the video streaming component 240 receives the video data captured using the manipulated camera device and encodes the captured video data in a suitable format (block 530). The video streaming component 240 then facilitates the transmission of the encoded video data to a remote video conferencing endpoint for display (block 535), and the method 500 ends. For example, the video streaming component 240 transmits the encoded video data (e.g., using a communications device 250, such as a Bluetooth® transceiver or a wired connection) to a user device 125, and a video streaming application on the user device 125 in turn transmits the encoded video data (e.g., using a network interface controller or wireless interface controller) to the remote video conferencing endpoint device for display.
  • FIG. 6 is a flow diagram illustrating a method for creating a video stream by extracting a portion of a higher resolution video stream, according to one embodiment described herein.
  • the method 600 begins at block 610 where a high-resolution camera captures video data of a physical environment.
  • the camera controller component 230 then performs a facial recognition analysis to identify a plurality of video conferencing participants within the frames of the captured video data (block 615 ).
  • the camera controller component 230 also determines a measure of motion for each identified participant across the frames of the video data (block 620 ). In doing so, the camera controller component 230 can determine a region of the frames that corresponds to each participant's mouth and can restrict the motion determination to this region. That is, the camera controller component 230 can determine which of the participants is currently speaking within the physical environment, and may ignore other forms of motion such as one of the participants nodding or scratching his head.
  • the camera controller component 230 selects one of the identified participants, based on the determined measures of motion (block 625). For example, the camera controller component 230 could select the participant whose movement most closely matches a predefined movement profile indicative of user speech. Of note, this may not be the user with the most motion from frame to frame, as the camera controller component 230 may select the user who is determined to be speaking as opposed to a user who is simply moving throughout the physical environment (e.g., a user arriving late to the video conference and in the process of sitting down).
  • upon selecting one of the participants, the camera controller component 230 creates a second video stream having a lower resolution than the captured video stream, by extracting a portion of video data from each frame of the captured video stream that includes the selected participant (block 630). In doing so, the camera controller component 230 essentially creates a virtual camera focused on the selected participant.
  • the size of the extracted portion of video data corresponds to a desired resolution of the video stream for the video conference. For example, if the video conference is configured to use video data having a resolution of 1024 ⁇ 768 pixels, the camera controller component 230 could extract a 1024 ⁇ 768-sized portion from each frame of the captured video.
  • a video streaming component 240 facilitates the transfer of the created second video stream to a remote video conferencing site for display (block 635 ), and the method 600 ends.
  • the video streaming component 240 transmits the video data to a user device 125 , which in turn transmits the video data to the remote video conferencing site for display.
  • doing so allows a single camera to be used for both the motion detection analysis and the capture of the video data to be streamed to the remote video conferencing site.
  • multiple camera devices are provided to capture multiple video streams of the physical environment, and the generated video stream that is transmitted to the remote video conferencing device is made up of multiple captured video streams.
  • these camera devices as well as control logic for controlling the movement of the camera devices can be provided within a single enclosure.
  • FIG. 7 is a block diagram illustrating a video conferencing environment that includes a video conferencing apparatus, according to one embodiment described herein. As shown, the diagram includes a video conferencing environment 700 , which includes a video conferencing apparatus 720 , a whiteboard 714 , user participants 718 and a currently speaking user 716 .
  • the video conferencing apparatus 720 includes a wide angle camera device 115 having a field of view 722 , pan and tilt camera devices 120 A and 120 B having fields of view 724 and 726 , respectively, and a control device 130 .
  • the pan and tilt camera device 120 A is positioned and/or oriented to view the whiteboard 714, the pan and tilt camera device 120 B is positioned and/or oriented to view a different portion of the video conferencing environment 700, and the wide angle camera device 115 is positioned and/or oriented to view at least a portion of the areas viewed by both of the pan and tilt camera devices 120 A and 120 B.
  • control device 130 can analyze the video data captured by the wide angle camera device 115 to determine how to manipulate the pan and tilt camera devices 120 A and 120 B. For purposes of the depicted example, assume that the control device 130 has analyzed the video stream from the wide angle camera device 115 and has determined that the user 716 is currently speaking, and based on this, has manipulated the orientation of the pan and tilt camera device 120 B so that the user 716 is substantially centered within the video frames captured by the pan and tilt camera device 120 B.
  • the control device 130 can generate a video stream for transmission to the remote video conference endpoint 140 .
  • the control device 130 could transmit video data captured using the pan and tilt camera device 120 B to the user device 125 , which then transmits the video data to the remote video conference endpoint 140 for display, using the network 135 .
  • the control device 130 is configured to generate a composite video stream that includes video data captured from all of the pan and tilt camera devices 120 A, 120 B and the wide angle camera device 115. The control device 130 can then provide the generated composite video stream to the user device 125, for transmission to the video conference endpoint 140 using network 135.
  • Interface 701 illustrates an example of a rendering of the output of a composite video stream, where portion 702 corresponds to video data captured using the wide angle camera device 115 , portion 704 corresponds to video data captured using pan and tilt camera device 120 A, and portion 706 corresponds to video data captured using pan and tilt camera device 120 B.
  • the portion 704 may be formed from a portion of the video data captured using the wide angle camera device 115 (e.g., a sub-region of portion 702 ), as similarly discussed above in conjunction with FIG. 3 .
  • Rendering of the composite video stream may be performed by simultaneously displaying each of the video data elements found in the composite video stream on a display device that is part of the video conference endpoint 140 .
  • the interface 701 is provided without limitation and for illustrative purposes only, and of course any number of different arrangements and combinations of video streams can be used, consistent with the present disclosure.
  • embodiments limit the data transmitted over the network 135 to only a single video stream from the environment 700 to the video conference endpoint 140 . That is, while the interface 701 includes multiple captured video streams, only a single composite video stream is provided by the control device 130 to the user device 125 and is subsequently transmitted across the network 135 to the video conference endpoint 140 , thereby providing a more intelligent and sophisticated video conferencing system while reducing the needed computing and networking resources, relative to conventional techniques.
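  • One way such a composite frame could be tiled is shown below; the specific layout mirrors the illustrative arrangement of interface 701 (wide view above, the two pan and tilt views below) and is an assumption:

```python
import cv2
import numpy as np

def composite_frame(wide: np.ndarray, ptz_a: np.ndarray, ptz_b: np.ndarray,
                    out_w: int = 1920, out_h: int = 1080) -> np.ndarray:
    """Tile three feeds into one frame: the wide view (portion 702) across
    the top, the two pan/tilt views (portions 704, 706) side by side below.
    The result is a single stream suitable for one encode/transmit pass."""
    top = cv2.resize(wide, (out_w, out_h // 2))
    left = cv2.resize(ptz_a, (out_w // 2, out_h - out_h // 2))
    right = cv2.resize(ptz_b, (out_w - out_w // 2, out_h - out_h // 2))
    return np.vstack([top, np.hstack([left, right])])
```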
  • the illustration 800 includes auditorium seating sections 801 , 802 and 803 , and video conferencing enclosures 810 A-C.
  • Each of the video conferencing enclosures 810 A-C includes a respective pan and tilt camera sensor 120 and a respective wide angle camera sensor 115. Additionally, each of the video conferencing enclosures 810 A-C may include respective control logic 130. In one example, each of the video conferencing enclosures 810 A-C includes a video conferencing apparatus 720, which is discussed above.
  • the video conferencing enclosures 810 A and 810 C are connected by communication link 812 A (e.g., a wired communication link, a wireless communication link, etc.) and the video conferencing enclosures 810 B and 810 C are connected by communication link 812 B.
  • the video conferencing enclosure 810 C is configured to act in a master device role, while the video conferencing enclosures 810 A and 810 B are configured to act in a slave device role.
  • the video conferencing enclosures 810 A and 810 B are configured to provide video stream data, captured using the respective camera sensors 115 and/or 120, to the video conferencing enclosure 810 C over the communication links 812 A and 812 B.
  • the video conferencing enclosure 810 C is configured to generate a video stream to transmit to a remote video conferencing endpoint device, based on the received video streams.
  • the video conferencing enclosure 810 C can further generate the video stream based on video data captured using the camera sensors 115 and 120 on the video conferencing enclosure 810 C.
  • control logic for the video conferencing enclosure 810 C can analyze the received video data received from the other video conferencing enclosures, as well as the video data captured using the camera sensors 115 and 120 on the video conferencing enclosure 810 C, to determine which video stream(s) to include in the generated video stream.
  • such a generated video stream can be composed of video data captured using a single camera device or can be a composite video stream that includes video data captured using multiple camera devices (e.g., multiple camera devices within a single video conferencing apparatus, multiple camera devices from multiple video conferencing apparatuses, etc.).
  • the generated video stream is then transmitted to the user device 125 (e.g., a personal computing device executing a video streaming software application, such as Microsoft® Skype®).
  • upon receiving the generated video stream, the user device 125 transmits the video data to the video conference endpoint device 140 (e.g., a remote user device that is also executing a respective video streaming software application) using the network 135. Doing so allows the video conferencing techniques disclosed herein to be extended to any sized physical environment through the use of additional video conferencing enclosures.
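  • A sketch of how the master enclosure 810 C might choose among the received streams is shown below; the per-stream activity score is an assumed input (e.g., summed mouth-motion scores over detected faces), not a method specified by the patent:

```python
def select_active_stream(streams: dict) -> str:
    """Master-role selection sketch: given {enclosure_id: activity_score}
    computed from each received stream, pick the enclosure whose stream
    shows the most activity to include in the generated video stream."""
    return max(streams, key=streams.get)

# e.g. select_active_stream({"810A": 0.4, "810B": 2.9, "810C": 0.7}) -> "810B"
```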
  • aspects disclosed herein may be embodied as a system, method or computer program product. Accordingly, aspects may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
  • the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
  • a computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • a computer readable storage medium is any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus or device.
  • a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof.
  • a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • the computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • each block in the flowchart or block diagrams may represent a module, segment or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.

Abstract

Embodiments provide techniques for facilitating transmission of a video stream from a first video conferencing device to a remote video conferencing device. Embodiments receive, by the first video conferencing endpoint device, first video data captured from a first field of view of a physical environment. The video data includes a plurality of frames. Activity data is determined for portions of the first video data across the plurality of frames. Embodiments generate, by a first video conferencing endpoint device, second video data from a second field of view of the physical environment, based on the determined activity data. Additionally, embodiments facilitate the transmission of the video stream to the remote video conferencing device for display, the video stream comprising the generated second video data and audio data captured within the physical environment.

Description

BACKGROUND
1. Field
The present invention relates to video cameras, and in particular to a smart video conferencing system that controls a video camera based on measures of activity within video data captured using another video camera.
2. Background
Video conferencing has become more popular in recent years, thanks in large part to the proliferation of high-speed Internet access and price reductions in camera equipment. For example, dedicated video conferencing locations exist where rooms and technological resources are dedicated solely to the task of video conferencing. In particularly sophisticated environments that include multiple camera devices, server-side logic may be provided that is capable of dynamically switching between the video feeds of the various cameras when determining which video data to display at a remote video conferencing site. Additionally, many modern instant messaging software applications support voice and video chatting, where the participants can view each other while talking.
Generally, a video camera captures frames in a sequential manner using an image sensor. A number of optimization operations can then be performed on the captured frames in order to improve the quality of the video data. For instance, pixel correction operations can be performed on each captured frame, where bad pixel information is used to correct for hot or dead pixels. Additionally, auto focus operations can be performed, where one or more frames are analyzed to determine whether the lens needs to be adjusted to achieve better focus. Upon determining that a lens adjustment is necessary, a feedback signal could be sent to motors or actuators to adjust the focal position of the lens. Additionally, color processing operations can be performed, where the frames are analyzed to determine if any color corrections are necessary. Such color corrections could include, for example, gamma correction, white balance correction and exposure correction.
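To make the color-processing step concrete, the following is a minimal sketch of two of the corrections mentioned above, gamma correction and a simple white balance, operating on an 8-bit RGB frame with NumPy. The gray-world balancing heuristic and the function names are illustrative assumptions; the disclosure does not prescribe a particular correction algorithm.

```python
import numpy as np

def gamma_correct(frame, gamma=2.2):
    """Gamma-correct an 8-bit RGB frame using a 256-entry lookup table."""
    lut = ((np.arange(256) / 255.0) ** (1.0 / gamma) * 255.0).astype(np.uint8)
    return lut[frame]  # fancy indexing maps every pixel through the table

def gray_world_white_balance(frame):
    """Gray-world balance: scale each channel so its mean matches the
    overall mean -- a simple stand-in for white balance correction."""
    pixels = frame.astype(np.float64)
    channel_means = pixels.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    return np.clip(pixels * gains, 0, 255).astype(np.uint8)
```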
Once any optimizations have been performed for the video data, the optimized video data can be encoded into a suitable format. Generally, the encoding format used can depend on the available network bandwidth and the application in question. For example, a dedicated video conference environment with a high bandwidth network connection could encode captured video data at a relatively high bit rate, while a video conference application on a mobile phone or tablet with a more limited network connection could encode the captured video data at a lower bit rate. The encoded video data can then be transmitted to the remote site for display via a communications network (e.g., the Internet).
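As a rough illustration of bandwidth-dependent encoding, the sketch below picks a target bit rate from a ladder of candidate rates. The ladder values and the 75% headroom factor are assumptions chosen for illustration only, not parameters from the disclosure.

```python
def select_bitrate_kbps(bandwidth_kbps, headroom=0.75):
    """Pick the highest rung of an illustrative bit rate ladder that still
    leaves headroom below the measured network bandwidth."""
    ladder_kbps = [4000, 2500, 1200, 800, 400]  # assumed H.264-style targets
    budget = bandwidth_kbps * headroom
    for rate in ladder_kbps:
        if rate <= budget:
            return rate
    return ladder_kbps[-1]  # floor for very constrained links
```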
While video conferencing technology is rapidly improving, it remains challenging to provide sophisticated video conferencing systems at relatively low cost. That is, while certain dedicated video conferencing environments provide many sophisticated features such as dynamically switching the displayed video stream between various captured video streams, such functionality currently comes at a substantial cost, in large part because such sophisticated setups require substantial computer hardware (e.g., multiple camera devices for capturing multiple different video feeds of the dedicated video conferencing environment, substantial network resources for transmitting multiple high-resolution video streams simultaneously, and server-side logic to select between the multiple different video feeds to determine which stream to display at the remote video conferencing site). As such, these sophisticated systems remain very expensive and priced beyond the practical reach of the average user.
Therefore, there is a need for a video conferencing system and method of using the same that solves the problems described above.
SUMMARY
One embodiment presented in this disclosure provides a method of facilitating transmission of a video stream from a first video conferencing device to a remote video conferencing device. The method includes receiving, by the first video conferencing endpoint device, first video data captured from a first field of view of a physical environment. The video data includes a plurality of frames. The method also includes determining activity data for portions of the first video data across the plurality of frames. Additionally, the method includes generating, by a first video conferencing endpoint device, second video data from a second field of view of the physical environment, based on the determined activity data. The method further includes facilitating transmission of the video stream to the remote video conferencing device for display, where the video stream includes the generated second video data and audio data captured within the physical environment.
Embodiments of the disclosure may further provide a method of generating a video stream for use in a video conference, comprising receiving first video data captured from a first field of view of a physical environment, the first video data comprising a plurality of frames, and determining activity data from portions of the first video data using information provided in the plurality of frames. The method then includes generating second video data from a second field of view of the physical environment, based on the determined activity data, and generating a video stream that comprises the generated second video data and audio data captured within the physical environment.
Another embodiment presented in this disclosure provides a system for facilitating transmission of a video stream from a first video conferencing device to a remote video conferencing device. The system includes a first camera sensor configured to capture first video data comprising a plurality of frames from a first field of view of a physical environment. Additionally, the system includes a second camera sensor and a mounting structure capable of adjusting an orientation of the second camera sensor along one or more degrees of freedom. The system further includes control logic configured to determine activity data for portions of the video data across the plurality of frames and control movement of the mounting structure to adjust the orientation of the second camera along the one or more degrees of freedom, based on the determined activity data. The system also includes video processing logic configured to capture video data from a second field of view of the physical environment using the second camera sensor, encode the captured video data, and facilitate transmission of the video stream to the remote video conferencing device for display, the video stream comprising the generated second video data and audio data captured within the physical environment.
Embodiments of the disclosure may further provide a system for generating a video stream for use in a video conference. The system includes a first camera sensor configured to capture first video data comprising a plurality of frames from a first field of view of a physical environment. Additionally, the system includes a second camera sensor and a mounting structure capable of adjusting an orientation of the second camera sensor along one or more degrees of freedom. The system further includes control logic configured to determine activity data for portions of the video data across the plurality of frames and control movement of the mounting structure to adjust the orientation of the second camera along the one or more degrees of freedom, based on the determined activity data. The system also includes video processing logic configured to capture video data from a second field of view of the physical environment using the second camera sensor, encode the captured video data, and generate a video stream comprising the generated second video data and audio data captured within the physical environment.
Embodiments of the disclosure may further provide a system for facilitating transmission of a video stream from a first video conferencing device to a remote video conferencing device. The system includes a camera sensor configured to capture first video data comprising a plurality of frames from a first field of view of a physical environment at a first resolution. The system also includes control logic configured to determine activity data for portions of the first video data across the plurality of frames, determine a portion of the captured first video data to extract, based on the determined activity data, and extract the portion of the captured video data to create second video data. In such an embodiment, the second video data has a second resolution that is less than the first resolution of the captured video data. The system further includes video processing logic configured to facilitate transmission of the video stream to the remote video conferencing device for display. The video stream includes the generated second video data and audio data captured within the physical environment.
Embodiments of the disclosure may further provide a system for generating a video stream for use in a video conference. The system includes a camera sensor configured to capture first video data comprising a plurality of frames from a first field of view of a physical environment at a first resolution. The system also includes control logic configured to determine activity data for portions of the first video data across the plurality of frames, determine a portion of the captured first video data to extract, based on the determined activity data, and extract the portion of the captured video data to create second video data. In such an embodiment, the second video data has a second resolution that is less than the first resolution of the captured video data. The system further includes video processing logic configured to generate a video stream that includes the generated second video data and audio data captured within the physical environment.
BRIEF DESCRIPTION OF THE DRAWINGS
So that the manner in which the above-recited features of the present disclosure can be understood in detail, a more particular description of the disclosure, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this disclosure and are therefore not to be considered limiting of its scope, for the disclosure may admit to other equally effective embodiments.
FIG. 1 is a diagram illustrating a video conferencing system for two video conferencing endpoints, according to one embodiment described herein.
FIG. 2 is a block diagram illustrating a video conferencing system for use as an endpoint to a video conference, according to one embodiment described herein.
FIG. 3 illustrates extracting video stream data from within a higher resolution video stream, according to one embodiment described herein.
FIG. 4 is a flow diagram illustrating a method for controlling a camera to capture video data based on activity detected within video data captured by another camera, according to one embodiment described herein.
FIG. 5 is a flow diagram illustrating a method for controlling a camera based on supplemental video streams and detected audio data, according to one embodiment described herein.
FIG. 6 is a flow diagram illustrating a method for creating a video stream by extracting a portion of a higher resolution video stream, according to one embodiment described herein.
FIG. 7 is a diagram illustrating a video conferencing environment that includes a video conferencing apparatus, according to one embodiment described herein.
FIG. 8 is a diagram illustrating a video conferencing environment that includes multiple video conferencing devices operating in a master-slave relationship, according to one embodiment described herein.
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements disclosed in one embodiment may be beneficially utilized on other embodiments without specific recitation.
DETAILED DESCRIPTION
While much advancement has been made in video conferencing technology, it is still challenging to provide sophisticated video conferencing systems at relatively low cost. For instance, while dedicated video conferencing environments exist today that provide many sophisticated features, such as multiple camera devices to capture multiple different video feeds of the dedicated video conferencing environment and server-side logic to select between the multiple different video feeds to determine which stream to display at the remote video conferencing site, such dedicated video conferencing systems remain very expensive and may be priced beyond the practical reach of the average user. Moreover, such sophisticated setups typically require much more network bandwidth, as multiple video streams are transmitted to an intermediary server that then selects between the multiple video streams.
On the other hand, inexpensive video conferencing systems exist but are predominantly limited to transmitting only a single video stream across the network. While this lowers the network requirements for using such video conferencing systems, it also limits what can be displayed at the remote video conferencing site to the single video stream. Thus, for video captured in a conference room environment, the remote site may simply see an overall view of the conference room environment, and any adjustments to the camera (e.g., pan, tilt and zoom operations) must be made manually by a user.
As such, embodiments provide a video conferencing endpoint system capable of manipulating a field of view of captured video content that is transmitted to a remote video conferencing endpoint device, based on measures of activity between frames of captured video content. For instance, embodiments may receive captured video data having a first field of view of a physical environment. Generally, the captured video data includes a sequential plurality of frames captured using a camera sensor. In one embodiment, the video data is captured using a wide angle camera sensor.
Embodiments may then analyze the captured video data to determine a plurality of measures of activity for portions of the video data across the plurality of frames. Generally, the measures of activity correspond to types of activities that are desirable to include in the video stream transmitted to the remote video conferencing site. For example, embodiments could analyze the captured video data to determine which of a plurality of users depicted within the video data is currently speaking. In doing so, embodiments could first perform a facial recognition analysis on the frames of the video data and could then determine a measure of motion for each detected face within the frames of the video data. Embodiments could then determine which measure of motion is most indicative of a user speaking.
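A minimal sketch of this speaker-detection idea follows, using OpenCV's stock Haar-cascade face detector and simple frame differencing over the lower third of each face box as the measure of motion. Both the detector and the motion metric are stand-ins chosen for illustration; the disclosure does not commit to a specific facial-recognition or motion-analysis technique.

```python
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def speaking_scores(prev_gray, curr_gray):
    """Score each face detected in the current grayscale frame by the amount
    of motion in its lower third (roughly the mouth region) versus the prior
    frame. Assumes a static camera so face coordinates line up across frames."""
    scores = []
    for (x, y, w, h) in face_cascade.detectMultiScale(curr_gray, 1.1, 5):
        mouth_top = y + (2 * h) // 3
        diff = cv2.absdiff(prev_gray[mouth_top:y + h, x:x + w],
                           curr_gray[mouth_top:y + h, x:x + w])
        scores.append(((x, y, w, h), float(np.mean(diff))))
    # The face with the highest score is the likeliest current speaker.
    return sorted(scores, key=lambda item: item[1], reverse=True)
```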
Embodiments could then use the determined measures of activity to generate stream video data having a second field of view of the physical environment. For instance, embodiments could control the movement of a second camera device to capture stream video data that is substantially centered on a physical entity within the physical environment, based on the determined measures of activity. As an example, logic for the video conferencing system could determine which user depicted within the captured video data is currently speaking and could control the movement of the second camera device (e.g., pan, tilt and zoom of the second camera device) to capture stream video data that is substantially centered on the user who is currently speaking. The stream video data could then be transmitted over a network to a remote video conferencing device for display. Doing so provides an intelligent video conferencing system that is capable of dynamically adjusting the video stream that is transmitted to the remote video conferencing device.
An example of such a system is shown in FIG. 1, which is a diagram illustrating a video conferencing system for two video conferencing endpoints, according to one embodiment described herein. As shown, the system 100 includes a video conferencing endpoint 110 and a video conferencing endpoint 140, interconnected via a network 135. The network 135 generally represents any data communications network suitable for the transmission of video and audio data (e.g., the Internet). In some configurations, each of the video conferencing endpoints 110, 140 includes one or more display devices for at least displaying received video and audio data, as well as video and audio capture devices (e.g., pan and tilt camera 120) for capturing video data to send to the other video conferencing endpoint. For example, the video conferencing endpoint 140 could represent a video conferencing software application (e.g., Microsoft® Skype®) executing on a mobile computing device that transmits captured video and audio data across the network 135 for display at the video conferencing endpoint 110 and that displays video and audio data received from the video conferencing endpoint 110. As another example, the video conferencing endpoint 140 could represent a dedicated video conferencing environment in which multiple camera devices are permanently installed. More generally, the video conferencing endpoint 140 represents any device(s) suitable for participating in a video conference, or even for just displaying the received video and audio data.
Of note, while numerous examples are provided herein involving capturing video stream data (e.g., using one or more camera devices) and transmitting the video stream data as part of a video conference, one of ordinary skill in the art will understand that such video stream data may include one or more captured audio streams as well (e.g., captured using one or more microphone devices within the physical environment). As such, while a particular example may be described as transmitting video data, a video stream and/or video stream data to a remote video conferencing site, it is explicitly contemplated that captured audio data can be transmitted as well using any suitable technique for representing and transmitting audio data.
As shown, the video conferencing endpoint 110 includes a wide angle camera device 115, a pan and tilt camera device 120, a control device 130, one or more microphones 131 and a user device 125. Generally, the wide angle camera device 115 is configured to capture a video stream of the physical environment, and preferably is positioned so that all users within the physical environment are depicted within the captured video stream. The pan and tilt camera device 120 is capable of adjusting the orientation of a camera sensor within the device 120, across multiple degrees of freedom. Additionally, the pan and tilt camera device 120 may be capable of zoom functionality, e.g., hardware zoom or software zoom. While the system 100 shown in FIG. 1 illustrates a configuration in which the wide angle camera device 115, the pan and tilt camera device 120 and the control device 130 are distinct and separate components, this configuration is not intended to limit the scope of the disclosure provided herein, since other configurations or groupings of these components are also envisioned. In one example, the system 100 may include a single enclosure, such as a video conferencing apparatus 720 (FIG. 7) discussed below, that includes the control device 130 and one or more camera devices (e.g., wide angle camera device 115 and one or more pan and tilt camera devices 120). In another example, the system 100 may include a single enclosure that includes the control device 130, one or more microphones 131 and the wide angle camera device 115. In yet another example, the system 100 may include a single enclosure that includes the control device 130, one or more microphones 131, the wide angle camera device 115 and the pan and tilt camera device 120. In some cases, it is desirable to position the camera devices so that they face the same direction (e.g., front facing) and are oriented so that their fields-of-view completely overlap.
Generally, the user device 125 represents any computing device capable of transmitting a video stream to a remote video conferencing device (e.g., video conferencing endpoint 140) over the network 135. Examples of the user device 125 include personal computing devices, tablet computing devices, mobile devices and so on. Moreover, the user device 125 can execute one or more video conferencing software applications capable of receiving video data from the control device 130 (e.g., captured using the pan and tilt camera device 120) and transmitting the captured video data to the video conferencing endpoint 140, via the network 135. Examples of such video conferencing applications include, without limitation, Microsoft® Skype® and Apple® FaceTime®. More generally, however, any video conferencing application capable of receiving video data and transmitting the video data to a remote site can be used, consistent with the functionality described herein. Additionally, the user device 125 may display video data captured at and received from the remote video conferencing endpoint 140, e.g., on a native display device of the user device 125 or on a separate display device (not shown) within the physical environment.
The control device 130 is generally configured to adjust an orientation of the pan and tilt camera device 120, based on detected measures of activity within a video stream captured by the wide angle camera device 115. For example, the control device 130 could analyze the video stream captured by the wide angle camera device 115 to determine which user depicted within the video data is currently speaking. The control device 130 could then adjust the orientation of the pan and tilt camera device 120, so that the video stream captured by the pan and tilt camera device 120 is substantially centered on the determined user that is currently speaking within the physical environment. In one embodiment, the control device 130 can also adjust the zoom level of the pan and tilt camera device 120, so that the user who is currently speaking is depicted at a predetermined size within the video stream captured by the pan and tilt camera device 120. For example, the control device 130 could adjust the zoom level such that the user occupies 70% of the captured video frames.
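The geometry of such an adjustment can be sketched as follows: given a face bounding box detected in the wide-angle frame, compute normalized pan/tilt offsets and a zoom factor that would make the face occupy a target fraction of the frame. The normalized units and the mapping to actual motor commands are assumptions; real pan and tilt devices expose vendor-specific control interfaces.

```python
def compute_ptz_adjustment(face_box, frame_size, target_fill=0.70):
    """Return normalized pan/tilt offsets in [-1, 1] and a zoom factor that
    would center the face box and make it fill `target_fill` of the frame
    height. Units are illustrative, not a device API."""
    x, y, w, h = face_box
    frame_w, frame_h = frame_size
    pan = ((x + w / 2.0) - frame_w / 2.0) / (frame_w / 2.0)   # + pans right
    tilt = ((y + h / 2.0) - frame_h / 2.0) / (frame_h / 2.0)  # + tilts down
    zoom = (target_fill * frame_h) / h                        # > 1 zooms in
    return pan, tilt, zoom
```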
The control device 130 could then transmit the video data captured using the pan and tilt camera 120 and the microphone 131 to the user device 125. Upon receiving the video data, the user device 125 could transmit the video data to the video conferencing endpoint 140 for display, using the network 135. Advantageously, the control device 130 provides intelligent camera control for video conferencing applications on the user device 125, without requiring the transmission of multiple video streams across the network 135.
FIG. 2 is a block diagram illustrating a video conferencing system for use as an endpoint to a video conference, according to one embodiment described herein. For example, the system 200 could be used as either the video conferencing endpoint 110 and/or 140 and may contain one or more of the related components shown in FIG. 1. As shown, the video conferencing system 200 includes a wide angle camera sensor 210, pan and tilt camera sensor 220, camera controller component 230, video streaming component 240, and microphone sensors 260. Generally, the wide angle camera sensor 210 represents a camera having a wide angle lens that is configured to capture video data of the physical environment. Preferably, such a wide angle camera sensor 210 is positioned so that all or substantially all users within the physical environment, and in front of the camera, are depicted within the captured video. The wide angle camera sensor 210 may have a field of view that has a viewing angle that is between about 70 degrees and about 180 degrees, such as a viewing angle of about 130 degrees. Additionally, the wide angle camera sensor 210 is capable of providing the captured video data to the camera controller component 230. As an example, the camera controller component 230 could use such data to control movement of the pan and tilt camera sensor 220 and to create the video stream to transmit to a remote video conferencing endpoint.
The pan and tilt camera sensor 220 represents a camera sensor affixed to a mounting structure that is capable of moving in one or more degrees of freedom. For example, the mounting structure may be capable of moving such that an orientation of the pan and tilt camera sensor 220 changes in the horizontal and/or vertical directions. Additionally, the pan and tilt camera sensor 220 can be capable of zoom functionality (e.g., hardware zoom, software zoom, etc.). The pan and tilt camera sensor 220 is capable of providing the captured video data to the camera controller component 230. As an example, the camera controller component 230 could use the data from the pan and tilt camera sensor 220 in creating the video stream to transmit to the remote video conferencing endpoint. The pan and tilt camera sensor 220 may have a field of view that has a viewing angle that is between about 5 degrees and about 80 degrees, such as a viewing angle of about 50 degrees.
The camera controller component 230 is generally configured to adjust the orientation of the pan and tilt camera sensor 220, based on detected measures of activity within video data captured by the wide angle camera sensor 210. For example, the camera controller component 230 could analyze the received video data captured using the wide angle camera sensor 210 to detect which user within the physical environment is currently speaking. The camera controller component 230 could then adjust the orientation of the pan and tilt camera sensor 220, such that the user currently speaking is substantially centered within video data captured using the pan and tilt camera sensor 220. As another example, the camera controller component 230 could analyze the video data captured using the wide angle camera sensor 210 and could determine that a particular user has moved to a predefined location within the physical environment (e.g., within a predefined distance of a whiteboard within the physical environment). The camera controller component 230 could then adjust the orientation of the pan and tilt camera sensor 220, such that the particular user is substantially centered or desirably positioned within the video data captured using the pan and tilt camera sensor 220.
Typically, the camera controller component 230 is a general use computing device that includes a processor 218, a memory unit 214, input/output (I/O) devices 216 and a communications device 250. The memory unit 214 is generally included to be representative of a random access memory, and represents any memory sufficiently large to hold the necessary programs and data structures. Memory unit 214 could be one or a combination of memory devices, including Random Access Memory, nonvolatile or backup memory (e.g., programmable or Flash memories, read-only memories, etc.). In addition, memory unit 214 may be considered to include memory physically located elsewhere, for example, on another computer communicatively coupled to the camera controller component 230. Illustratively, the memory unit 214 includes an operating system 219, one or more software applications 215, stored media data 217 and a model structure 213. Generally, the model structure 213 describes attributes of the physical environment in which the video conferencing system 200 is located (e.g., a make-up and arrangement of physical entities within the physical environment). Examples of operating system 219 include UNIX, a version of the Microsoft Windows® operating system, and distributions of the Linux® operating system. Additional examples of operating system 219 include custom operating systems for gaming consoles, such as those for the Nintendo DS® and Sony PSP®.
Processor 218 may be a hardware unit or combination of hardware units capable of executing software applications and processing data. In some configurations, the processor 218 includes a central processing unit (CPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), and/or a combination of such units. Generally, the processor 218 retrieves and executes programming instructions stored in the memory unit 214. Processor 218 is included to be representative of a single CPU, multiple CPUs, a single CPU having multiple processing cores, GPUs having multiple execution paths, and the like. For example, the processor 218 could execute the one or more software applications 215 and process the stored media data 217, which are each included within memory unit 214.
The video streaming component 240 is generally configured to facilitate the transfer of video data (e.g., video data captured using the pan and tilt camera sensor 220) to a remote video conferencing device for display (e.g., using a communications device 250 such as a network interface controller). For example, the video streaming component 240 can encode the captured video data in an encoding format and at a bit rate suitable for the video conferencing session and deliver the video data to the user device 125 via a wireless (e.g., WiFi, Bluetooth®, etc.) or wired connection (e.g., USB connection). The user device 125 could then transmit the video data to a remote video conferencing device using conventional communication devices and protocols (e.g., network interface card, Ethernet card, modem, wireless network hardware or other conventional computing device communication hardware). In some configurations, the video streaming component 240 provides the video data to a video streaming application on the user device 125, such as, without limitation, Microsoft® Skype® and Apple® FaceTime®, using the communication device 250.
The encoding format used by the video streaming component 240 could be determined based on the capabilities or specified preferences of the user device 125. One example of a suitable video encoding format is H.264 (and variants such as H.264/AVC, H.264 High Profile and H.264 SVC). More generally, however, any suitable encoding format can be used, consistent with the present disclosure. For example, a particular video streaming application on the user device 125 could be configured to transmit streaming video data in a particular encoding format, and thus the video streaming component 240 could be configured to encode the generated video data using the particular encoding format for delivery to the streaming application on the user device. Additionally, the remote video conferencing device may be capable of receiving the encoded video data from the user device and outputting the received data on a display. In one embodiment, the encoding format can be specified by a user of the video conferencing system 200. Moreover, in some instances, the user device 125 (or a video streaming application executing on the user device) can transcode the received video data to another format, before transmitting the video data to the remote video conferencing endpoint device.
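A hedged sketch of such format selection might look like the following, where the profile identifiers and preference order are purely illustrative placeholders rather than a real device API:

```python
def negotiate_codec(device_supported,
                    preferred=("h264_high", "h264_svc", "h264_baseline")):
    """Return the first encoding profile the user device reports supporting.
    The identifiers here are hypothetical labels, not standardized strings."""
    for codec in preferred:
        if codec in device_supported:
            return codec
    raise ValueError("no mutually supported encoding format")

# Example: a device that only supports the baseline profile.
assert negotiate_codec({"h264_baseline"}) == "h264_baseline"
```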
In one embodiment, the camera controller component 230 can adjust the orientation of the pan and tilt camera sensor 220 in order to intelligently capture video data within the physical environment and the camera controller component 230 can then transmit the captured video data to the video conferencing application on the user device (e.g., over a wireless network connection, over a wired connection, etc.). The video conferencing application could then perform any needed encoding operations on the received video data (e.g., transcoding the received video data from a first format to another format used by the video conferencing application) and could then transmit the video data to a remote video conferencing device for display. Advantageously, in such an embodiment the camera controller component 230 provides intelligent video conferencing services for existing video conferencing applications of the user device, thereby improving the performance of the video conferencing applications on the user device without any modifications to the video conferencing applications.
In one embodiment, the camera controller component 230 is configured to use the microphone sensors 260 in order to determine an orientation for the pan and tilt camera sensor 220. For instance, the microphone sensors 260 could represent a microphone array (e.g., two or more microphone sensor devices operating in tandem) and the camera controller component 230 could include logic to determine a direction from which particular audio content was received by the microphone sensors 260 within the microphone array. As an example, the camera controller component 230 could analyze audio data collected by the microphone sensors 260 within the microphone array and could determine that a portion of the audio data matches a predefined signature corresponding to user speech data. The camera controller component 230 could then determine a direction from which the user speech data originated, relative to a physical position of the microphone sensors 260. The camera controller component 230 could then use the determined direction to orient the pan and tilt camera sensor 220, such that the pan and tilt camera sensor 220 is oriented in substantially the determined direction from which the user speech data originated. By doing so, the camera controller component 230 could capture video data of the user who is currently speaking within the physical environment. As discussed above, in doing so, the camera controller component 230 can also alter a zoom level of the pan and tilt camera sensor 220, so that the user that is speaking is predominantly featured within the captured video data. For example, the camera controller component 230 could alter the zoom of the pan and tilt camera sensor 220 until the speaking user occupies a predefined portion of the captured frames of video data.
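Direction-of-arrival estimation from a microphone array is commonly done with a cross-correlation method such as GCC-PHAT; the sketch below shows that approach for a two-microphone array. This is one plausible realization offered as an assumption, not necessarily the method used by the camera controller component 230.

```python
import numpy as np

def gcc_phat_delay(sig_a, sig_b, fs):
    """Estimate the inter-microphone time delay (seconds) via GCC-PHAT."""
    n = len(sig_a) + len(sig_b)
    cross_spec = np.fft.rfft(sig_a, n) * np.conj(np.fft.rfft(sig_b, n))
    cross_spec /= np.abs(cross_spec) + 1e-12       # PHAT weighting
    cc = np.fft.irfft(cross_spec, n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

def arrival_angle_degrees(delay_s, mic_spacing_m, speed_of_sound=343.0):
    """Convert a delay into a bearing for a two-microphone array."""
    s = np.clip(delay_s * speed_of_sound / mic_spacing_m, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))
```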
In one embodiment, the camera controller component 230 is configured to use a single high-resolution camera to capture a video stream that is at least partially transmitted to the remote video conferencing device. An example of this is shown in FIG. 3, which illustrates extracting video data from within a higher resolution video stream, according to one embodiment described herein. As shown, the screenshot illustrates a plurality of users within a video conferencing environment. In the depicted example, the user 305 is currently speaking. As discussed above, the camera controller component 230 could perform an analysis of the captured video data to determine a physical entity (e.g., a particular user) to be the focus of the stream video data transmitted to the remote video conferencing device. For instance, the camera controller component 230 could determine that the frames should focus on the user 305 who is currently speaking within the physical environment.
Upon determining that the user 305 is currently speaking, the camera controller component 230 could determine a portion of the captured video data to extract as stream video data to be transmitted to the remote video conferencing device. For example, in the illustration 300, the camera controller component 230 has determined the region 310 around the user 305, and the controller component 230 could extract the region 310 from each frame of the captured video data to create a stream of video data. In doing so, the camera controller component 230 can dynamically adjust the region 310 for each frame of the captured video, so that the user 305 is substantially centered within each frame of the stream video data. Doing so allows a single camera to be used for both the detection of activity within the physical environment and the capture of video data to be streamed to the remote video conferencing device.
Generally, as each frame of the stream video data is extracted from a portion of a corresponding frame of the captured video data, the stream video data is a lower resolution video stream than the captured video data. For example, and without limitation, the captured video data could have a resolution of 3840×2160 pixels, while the stream video data extracted from the captured video data could have a resolution of 1024×768 pixels. Such an embodiment may be preferable, for instance, because many modern devices have the computing resources and network resources to support the transmission and display of video data with a resolution of 1024×768 pixels, while substantially fewer devices today may be capable of supporting (i.e., with sufficient computing and network resources) a video stream having a resolution of 3840×2160 pixels. In one example, the stream video data is sent to the external endpoint from the user device 125 in a 1080p resolution (or other desirable resolution) using typical computing and network resources. Likewise, the stream video data can be sent to the user device 125 in a 1080p resolution (or other desirable resolution), via the video conferencing system (e.g., system 200), so that the user device 125 can then send the stream video data to the external endpoint. More generally, however, any suitable resolutions for the captured video data and the stream video data can be used, consistent with the present disclosure.
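The per-frame extraction itself reduces to a clamped crop of the high-resolution frame around the tracked subject, as in this sketch. The 1024×768 output size mirrors the example above, and the frame is assumed to be at least that large; the function name is illustrative.

```python
import numpy as np

def extract_stream_frame(frame, center_xy, out_w=1024, out_h=768):
    """Crop an out_w x out_h window centered on the tracked subject,
    clamped so the window never leaves the high-resolution frame."""
    frame_h, frame_w = frame.shape[:2]
    cx, cy = center_xy
    x0 = int(np.clip(cx - out_w / 2, 0, frame_w - out_w))
    y0 = int(np.clip(cy - out_h / 2, 0, frame_h - out_h))
    return frame[y0:y0 + out_h, x0:x0 + out_w]
```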
FIG. 4 is a flow diagram illustrating a method for controlling a camera to capture video data based on activity detected within video data captured by another camera, according to one embodiment described herein. As shown, the method 400 begins at block 410, where a wide-angle camera sensor captures a wide-angle video stream of a video conferencing environment. The camera controller component 230 analyzes the wide-angle video stream to detect a region of activity within the frames of the video stream (block 415). As discussed above, examples of such activity include (without limitation) a user currently speaking and a user moving to a predefined location within the physical environment (e.g., a whiteboard).
The camera controller component 230 then manipulates a second camera device (e.g., pan and tilt camera sensor 220) to capture a video stream corresponding to the detected region of activity (block 420). As an example, upon determining that a particular user is currently speaking based on an analysis of the captured wide-angle video stream, the camera controller component 230 could manipulate the position of the second camera device such that the second camera device is substantially oriented in the direction of the particular user to collect video data of the user. A video streaming component 240 can then receive the video data captured using the second camera device and can facilitate the transfer of the captured video data to a remote video conferencing endpoint for display (block 425), and the method 400 ends. For example, the video streaming component 240 could provide the captured video data to a streaming application on a user device 125, which in turn transmits the video data to the remote videoconferencing endpoint (e.g., via the Internet). Advantageously, doing so provides an intelligent video conferencing system that is capable of providing a video stream that selectively focuses on physical elements within the video conferencing environment (e.g., a user who is currently speaking), without requiring multiple, distinct video streams to be transmitted across the network from the video conferencing system (e.g., to an intermediary server system for processing), thereby reducing the computing and network resources used in providing video conferencing services.
FIG. 5 is a flow diagram illustrating a method for controlling a camera based on supplemental video streams and detected audio data, according to one embodiment described herein. As shown, the method 500 begins at block 510, where the camera controller component 230 analyzes one or more supplemental video streams to determine a respective one or more measures of activity for regions of the supplemental video streams. For example, one of the supplemental video streams could be the video stream captured using a wide-angle camera sensor. In some environments, additional camera sensors may be provided, such as a camera sensor oriented substantially in the direction of a predefined area of interest (e.g., a whiteboard within the video conferencing environment). The camera controller component 230 can analyze the frames of each of these video streams in order to determine an orientation for a pan and tilt camera device that captures video data to be transmitted to a remote video conferencing device.
Additionally, the camera controller component 230 collects audio data using microphone sensors within the physical environment (block 515) and determines a direction from which at least a portion of the audio data originated (block 520). For instance, as discussed above, a microphone array could be used to capture the audio data and, upon identifying that a portion of the audio data matches a predefined profile for user speech, the camera controller component 230 could use the data collected from the microphone sensors in the microphone array to determine a direction from which the user speech originated.
The camera controller component 230 then manipulates an orientation of a camera device (e.g., a pan and tilt camera sensor 220) based on the determined measures of activity, the determined direction from which at least a portion of the audio data originated, and a mapping structure describing a layout of the physical environment (block 525). For example, the mapping structure could specify a location of physical elements within the physical environment, such as chairs in which the users may be seated, and the camera controller component 230 could use this mapping structure to more accurately orient the camera device in a direction of interest. Additionally, such a mapping structure could specify predefined areas of interest within the physical environment, such as the location of a whiteboard relative to a physical position of the camera devices. If the camera controller component 230 then detects activity within frames of the supplemental video stream(s) that is indicative of a user moving to the predefined area of interest within the physical environment, the camera controller component 230 could manipulate the orientation of the camera device to face (and capture video data of) the predefined area of interest.
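The disclosure does not define a schema for the mapping structure; one plausible minimal representation, shown below purely as an assumption, maps named areas of interest within the room to preset camera orientations. All field names and values are hypothetical.

```python
# Hypothetical mapping structure: named areas of interest in the room,
# each paired with a preset camera orientation.
ROOM_MAP = {
    "whiteboard": {"pan_deg": -30.0, "tilt_deg": 5.0, "zoom": 1.5},
    "seat_1":     {"pan_deg": -10.0, "tilt_deg": 0.0, "zoom": 2.0},
    "seat_2":     {"pan_deg":  10.0, "tilt_deg": 0.0, "zoom": 2.0},
}

def preset_for(area_of_interest):
    """Look up the camera preset for a predefined area of interest."""
    return ROOM_MAP.get(area_of_interest)
```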
The video streaming component 240 receives the video data captured using the manipulated camera device and encodes the captured video data in a suitable format (block 530). The video streaming component 240 then facilitates the transmission of the encoded video data to a remote video conferencing endpoint for display (block 535), and the method 500 ends. For example, the video streaming component 240 transmits the encoded video data (e.g., using a communications device 250, such as a BlueTooth® transceiver or a wired connection) to a user device 125, and a video streaming application on the user device 125 in turn transmits the encoded video data (e.g., using a network interface controller or wireless interface controller) to the remote video conferencing endpoint device for display.
FIG. 6 is a flow diagram illustrating a method for creating a video stream by extracting a portion of a higher resolution video stream, according to one embodiment described herein. As shown, the method 600 begins at block 610 where a high-resolution camera captures video data of a physical environment. The camera controller component 230 then performs a facial recognition analysis to identify a plurality of video conferencing participants within the frames of the captured video data (block 615). The camera controller component 230 also determines a measure of motion for each identified participant across the frames of the video data (block 620). In doing so, the camera controller component 230 can determine a region of the frames that corresponds to each participant's mouth and can restrict the motion determination to this region. That is, the camera controller component 230 can determine which of the participants is currently speaking within the physical environment, and may ignore other forms of motion such as one of the participants nodding or scratching his head.
The camera controller component 230 then selects one of the identified participants, based on the determined measures of motion (block 625). For example, the camera controller component 230 could select the participant whose movement most closely matches a predefined movement profile indicative of user speech. Of note, this may not be the user with the most motion from frame to frame, as the camera controller component 230 may select the user who is determined to be speaking as opposed to a user who is simply moving throughout the physical environment (e.g., a user arriving late to the video conference and in the process of sitting down).
Upon selecting one of the participants, the camera controller component 230 creates a second video stream having a lower resolution than the captured video stream, by extracting a portion of video data from each frame of the captured video stream that includes the selected participant (block 630). In doing so, the camera controller component 230 essentially creates a virtual camera focused on the selected participant. Generally, the size of the extracted portion of video data corresponds to a desired resolution of the video stream for the video conference. For example, if the video conference is configured to use video data having a resolution of 1024×768 pixels, the camera controller component 230 could extract a 1024×768-sized portion from each frame of the captured video. A video streaming component 240 facilitates the transfer of the created second video stream to a remote video conferencing site for display (block 635), and the method 600 ends. In one configuration, as discussed above, the video streaming component 240 transmits the video data to a user device 125, which in turn transmits the video data to the remote video conferencing site for display. Advantageously, doing so allows a single camera to be used for both the motion detection analysis and for the capture of the video data to be streamed to the remote video conferencing site.
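One practical detail the flow above leaves open is how the extracted region is kept stable from frame to frame; a common approach, offered here only as an assumption rather than the disclosed method, is to exponentially smooth the crop center so the virtual camera does not jitter with small detection noise.

```python
class SmoothedCenter:
    """Exponential moving average of the crop center, so small per-frame
    jitter in the detected face position does not shake the virtual camera."""
    def __init__(self, alpha=0.2):
        self.alpha = alpha      # smaller alpha = steadier but slower tracking
        self._center = None

    def update(self, x, y):
        if self._center is None:
            self._center = [float(x), float(y)]
        else:
            self._center[0] += self.alpha * (x - self._center[0])
            self._center[1] += self.alpha * (y - self._center[1])
        return tuple(self._center)
```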
In one embodiment, multiple camera devices are provided to capture multiple video streams of the physical environment, and the generated video stream that is transmitted to the remote video conferencing device is made up of multiple captured video streams. Moreover, these camera devices as well as control logic for controlling the movement of the camera devices can be provided within a single enclosure. An example of this is shown in FIG. 7, which is a diagram illustrating a video conferencing environment that includes a video conferencing apparatus, according to one embodiment described herein. As shown, the diagram includes a video conferencing environment 700, which includes a video conferencing apparatus 720, a whiteboard 714, user participants 718 and a currently speaking user 716. The video conferencing apparatus 720 includes a wide angle camera device 115 having a field of view 722, pan and tilt camera devices 120A and 120B having fields of view 724 and 726, respectively, and a control device 130. In one configuration, as shown in FIG. 7, the pan and tilt camera device 120A is positioned and/or oriented to view the whiteboard, the pan and tilt camera device 120B is positioned and/or oriented to view a different portion of the video conferencing environment 700 and the wide angle camera device 115 is positioned and/or oriented to view at least a portion of the areas viewed by both of the pan and tilt camera devices 120A and 120B.
As discussed above, the control device 130 can analyze the video data captured by the wide angle camera device 115 to determine how to manipulate the pan and tilt camera devices 120A and 120B. For purposes of the depicted example, assume that the control device 130 has analyzed the video stream from the wide angle camera device 115 and has determined that the user 716 is currently speaking, and based on this, has manipulated the orientation of the pan and tilt camera device 120B so that the user 716 is substantially centered within the video frames captured by the pan and tilt camera device 120B.
The control device 130 can generate a video stream for transmission to the remote video conference endpoint 140. As discussed above, the control device 130 could transmit video data captured using the pan and tilt camera device 120B to the user device 125, which then transmits the video data to the remote video conference endpoint 140 for display, using the network 135. In the depicted embodiment, the control device 130 is configured to generate a composite video stream that includes video data captured from all of the pan and tilt camera devices 120A, 120B and the wide angle camera device 115. The control device 130 can then provide the generated composite video stream to the user device 125, for transmission to the video conference endpoint 140 using network 135.
Interface 701 illustrates an example of a rendering of the output of a composite video stream, where portion 702 corresponds to video data captured using the wide angle camera device 115, portion 704 corresponds to video data captured using pan and tilt camera device 120A, and portion 706 corresponds to video data captured using pan and tilt camera device 120B. However, in some configurations of the interface 701, the portion 704 may be formed from a portion of the video data captured using the wide angle camera device 115 (e.g., a sub-region of portion 702), as similarly discussed above in conjunction with FIG. 3. Rendering of the composite video stream may be performed by simultaneously displaying each of the video data elements found in the composite video stream on a display device that is part of the video conference endpoint 140. Of course, the interface 701 is provided without limitation and for illustrative purposes only; any number of different arrangements and combinations of video streams can be used, consistent with the present disclosure. Advantageously, by generating the video stream at the control device 130, embodiments limit the data transmitted over the network 135 to only a single video stream from the environment 700 to the video conference endpoint 140. That is, while the interface 701 includes multiple captured video streams, only a single composite video stream is provided by the control device 130 to the user device 125 and is subsequently transmitted across the network 135 to the video conference endpoint 140, thereby providing a more intelligent and sophisticated video conferencing system while reducing the needed computing and networking resources, relative to conventional techniques.
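A composite frame like the one rendered in interface 701 can be assembled by resizing and tiling the individual feeds onto one canvas, as in the sketch below; the particular layout and output size are illustrative assumptions, not the patent's prescribed arrangement.

```python
import cv2
import numpy as np

def composite_frame(wide, cam_a, cam_b, out_w=1920, out_h=1080):
    """Tile three source frames onto one canvas: the wide-angle view across
    the top half, the two pan/tilt views side by side in the bottom half."""
    canvas = np.zeros((out_h, out_w, 3), dtype=np.uint8)
    half_h, half_w = out_h // 2, out_w // 2
    canvas[:half_h] = cv2.resize(wide, (out_w, half_h))
    canvas[half_h:, :half_w] = cv2.resize(cam_a, (half_w, out_h - half_h))
    canvas[half_h:, half_w:] = cv2.resize(cam_b, (out_w - half_w, out_h - half_h))
    return canvas
```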
In addition to a single enclosure that includes multiple camera devices (e.g., the enclosure 720 that includes camera devices 115, 120A and 120B and in some configurations a microphone (not shown)), in one embodiment multiple video conferencing enclosures are configured to operate in tandem in a video conferencing environment to provide a single video stream to a remote endpoint device. Such an embodiment may be preferable, for instance, when capturing video streams of a large physical environment such as an auditorium, where a single camera device may not have a sufficient field of view to capture video data that includes all of the participants to the video conference. An example of this is shown in FIG. 8, which is a diagram illustrating a video conferencing environment that includes multiple video conferencing devices operating in a master-slave relationship, according to one embodiment described herein. As shown, the illustration 800 includes auditorium seating sections 801, 802 and 803, and video conferencing enclosures 810A-C. Each of the video conferencing enclosures 810A-C includes a respective pan and tilt camera sensor 120 and a respective wide angle camera sensor 115. Additionally, each of the video conferencing enclosures 810A-C may include respective control logic 130. In one example, each of the video conferencing enclosures 810A-C includes a video conferencing apparatus 720, which is discussed above.
As shown, the video conferencing enclosures 810A and 810C are connected by communication link 812A (e.g., a wired communication link, a wireless communication link, etc.) and the video conferencing enclosures 810B and 810C are connected by communication link 812B. In the depicted embodiment, the video conferencing enclosure 810C is configured to act in a master device role, while the video conferencing enclosures 810A and 810B are configured to act in a slave device role. That is, the video conferencing enclosures 810A and 810B are configured to provide video stream data captured using the respective camera sensors 115 and/or 120, and the video conferencing enclosure 810C is configured to generate a video stream to transmit to a remote video conferencing endpoint device, based on the received video streams. Additionally, the video conferencing enclosure 810C can further generate the video stream based on video data captured using the camera sensors 115 and 120 on the video conferencing enclosure 810C. For example, control logic for the video conferencing enclosure 810C can analyze the video data received from the other video conferencing enclosures, as well as the video data captured using the camera sensors 115 and 120 on the video conferencing enclosure 810C, to determine which video stream(s) to include in the generated video stream. As discussed above, such a generated video stream can be composed of video data captured using a single camera device or can be a composite video stream that includes video data captured using multiple camera devices (e.g., multiple camera devices within a single video conferencing apparatus, multiple camera devices from multiple video conferencing apparatuses, etc.).
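At its simplest, the master-side selection reduces to picking the enclosure whose stream currently scores highest on the activity measures. The sketch below assumes each slave reports a scalar activity score over its communication link; the scoring scheme and identifiers are hypothetical.

```python
def select_active_enclosure(activity_scores):
    """Master-side selection: pick the enclosure whose stream currently
    shows the most activity. `activity_scores` maps enclosure id to the
    latest scalar activity score reported by that enclosure."""
    return max(activity_scores, key=activity_scores.get)

# Example: scores reported by the slave enclosures over links 812A/812B.
scores = {"810A": 0.12, "810B": 0.87, "810C": 0.33}
assert select_active_enclosure(scores) == "810B"
```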
The generated video stream is then transmitted to the user device 125 (e.g., a personal computing device executing a video streaming software application, such as Microsoft® Skype®). Upon receiving the generated video stream, the user device 125 transmits the video data to the video conference endpoint device 140 (e.g., a remote user device that is also executing a respective video streaming software application) using the network 135. Doing so allows the video conferencing techniques disclosed herein to be extended to any sized physical environment through the use of additional video conferencing enclosures.
In the preceding, reference is made to embodiments presented in this disclosure. However, the scope of the present disclosure is not limited to specific described embodiments. Instead, any combination of the described features and elements, whether related to different embodiments or not, is contemplated to implement and practice contemplated embodiments. Furthermore, although embodiments disclosed herein may achieve advantages over other possible solutions or over the prior art, whether or not a particular advantage is achieved by a given embodiment is not limiting of the scope of the present disclosure. Thus, the preceding aspects, features, embodiments and advantages are merely illustrative and are not considered elements or limitations of the appended claims except where explicitly recited in a claim(s).
As will be appreciated by one skilled in the art, the embodiments disclosed herein may be embodied as a system, method or computer program product. Accordingly, aspects may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium is any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments presented in this disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the Figures illustrate the architecture, functionality and operation of possible implementations of systems, methods and computer program products according to various embodiments. In this regard, each block in the flowchart or block diagrams may represent a module, segment or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In view of the foregoing, the scope of the present disclosure is determined by the claims that follow.

Claims (29)

I claim:
1. A method of generating a video stream for use in a video conference, comprising:
receiving, by a first video conferencing endpoint device, first video data captured from a first field of view of a physical environment, the first video data comprising a plurality of frames;
determining activity data from portions of the first video data using information provided in the plurality of frames;
generating, by the first video conferencing endpoint device, second video data from a second field of view of the physical environment, based on the determined activity data;
generating a video stream that comprises the generated second video data and audio data captured within the physical environment; and
transmitting the video stream to a video conferencing application executing on a user device, wherein the video conferencing application is configured to process the video stream as an input video stream to facilitate the transmission of the video stream to a remote video conferencing device for display.
2. The method of claim 1, wherein generating the second video data from the second field of view of the physical environment further comprises controlling movement of a controlled camera device to capture the second video data, and wherein the received first video data is captured using a wide angle camera device, which is distinct from the controlled camera device.
3. The method of claim 2, wherein determining the activity data for portions of the first video data using information provided in the plurality of frames further comprises:
performing a facial detection analysis to detect a plurality of user faces within the first video data; and
determining a measure of motion for each of the detected plurality of user faces using information provided in the plurality of frames of the first video data.
4. The method of claim 3, wherein generating the second video data from the second field of view of the physical environment further comprises:
selecting one of the plurality of user faces having a corresponding determined measure of motion that is indicative of user speech; and
determining an orientation of the camera device for capturing video data substantially centered on the selected user face, and
wherein controlling the movement of the camera device to capture the second video data further comprises controlling the movement of the camera device to match the determined orientation.
5. The method of claim 4, wherein determining an orientation of the camera device for capturing the video data substantially centered on the selected user face further comprises:
identifying a physical entity corresponding to the selected user face by accessing a model structure describing an attribute of the physical environment; and
determining a direction of the identified physical entity, relative to a physical position of the camera device within the physical environment.
6. The method of claim 4, wherein generating the second video data from the second field of view of the physical environment further comprises:
collecting audio data from the physical environment using two or more microphone sensors;
identifying user speech within the collected audio data; and
determining a direction from which the identified user speech originates, relative to a physical position of the two or more microphone sensors, and
wherein determining the orientation of the camera device for capturing the video data substantially centered on the selected user face is further based on the determined direction from which the identified user speech originates.
7. The method of claim 1, wherein the received first video data is captured at a first resolution, and wherein generating the second video data from the second field of view of the physical environment further comprises:
extracting a portion of the first video data to create the second video data, wherein the second video data has a second resolution that is less than the first resolution of the first video data.
8. The method of claim 1, wherein the generated video stream further comprises the first video data, and the generated video stream is configured to allow a remote video conferencing device to simultaneously display the first video data and second video data.
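By way of illustration of the method recited in claims 1-4 (a sketch under stated assumptions, not the claimed implementation), the following pairs OpenCV's stock Haar-cascade face detector with a per-face frame-difference motion measure to select the likely speaking face and convert its position into pan and tilt angles. The field-of-view constants and the helper names are illustrative assumptions:

```python
import cv2
import numpy as np

HFOV_DEG, VFOV_DEG = 120.0, 90.0   # assumed wide-angle camera field of view

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def motion_per_face(prev_gray, gray, faces):
    """Mean absolute frame difference inside each detected face region."""
    scores = []
    for (x, y, w, h) in faces:
        diff = cv2.absdiff(gray[y:y + h, x:x + w], prev_gray[y:y + h, x:x + w])
        scores.append(float(diff.mean()))
    return scores

def orientation_for_face(face, frame_shape):
    """Map a face's center pixel to pan/tilt angles (linear-FOV assumption)."""
    (x, y, w, h), (rows, cols) = face, frame_shape[:2]
    pan = ((x + w / 2) / cols - 0.5) * HFOV_DEG
    tilt = ((y + h / 2) / rows - 0.5) * VFOV_DEG
    return pan, tilt

def track_speaker(prev_frame, frame):
    """Return (pan, tilt) for the most active face, or None if no face found."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    scores = motion_per_face(prev_gray, gray, faces)
    speaker = faces[int(np.argmax(scores))]   # most motion ~ likely speaker
    return orientation_for_face(speaker, gray.shape)
```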
9. A system for generating a video stream for use in a video conference, comprising:
a first camera sensor configured to capture first video data comprising a plurality of frames from a first field of view of a physical environment;
a second camera sensor;
a mounting structure capable of adjusting an orientation of the second camera sensor along one or more degrees of freedom;
control logic configured to:
determine activity data for portions of the first video data across the plurality of frames; and
control movement of the mounting structure to adjust the orientation of the second camera along the one or more degrees of freedom, based on the determined activity data; and
video processing logic configured to:
capture second video data from a second field of view of the physical environment using the second camera sensor;
encode the captured second video data;
generate a video stream comprising the captured second video data and audio data captured within the physical environment; and
transmit the video stream to a video conferencing application executing on a user device, wherein the video conferencing application is configured to process the video stream as an input video stream to facilitate the transmission of the video stream to a remote video conferencing device for display.
10. The system of claim 9, wherein the first camera sensor comprises a wide angle camera sensor.
11. The system of claim 10, wherein the control logic configured to determine the activity data for portions of the first video data across the plurality of frames is further configured to:
perform a facial detection analysis to detect a plurality of user faces within the first video data; and
determine a measure of motion for each of the detected plurality of user faces across the plurality of frames of the first video data.
12. The system of claim 11, wherein the control logic configured to control movement of the mounting structure to adjust the orientation of the second camera along the one or more degrees of freedom, based on the determined measures of motion, is further configured to:
select one of the plurality of user faces having a corresponding determined measure of motion that is indicative of user speech; and
determine an orientation for capturing video data substantially centered on the selected user face, and
wherein the movement of the mounting structure is controlled such that an orientation of the second camera sensor matches the determined orientation.
13. The system of claim 12, wherein the control logic configured to determine the orientation for capturing the video data substantially centered on the selected user face is further configured to:
identify a physical entity, within the physical environment, corresponding to the selected user face by accessing a model structure describing an attribute of the physical environment; and
determine a direction of the identified physical entity, relative to a physical position of the second camera sensor within the physical environment.
14. The system of claim 12, wherein the system further comprises two or more microphone sensors, and wherein the control logic configured to determine the orientation for capturing the video data substantially centered on the selected user face is further configured to:
collect audio data from the physical environment using the two or more microphone sensors;
identify user speech within the collected audio data; and
determine a direction from which the identified user speech originates, relative to a physical position of the two or more microphone sensors within the physical environment, and
wherein the logic configured to determine the orientation for capturing video data substantially centered on the selected user face operates further based on the determined direction from which the identified user speech originates.
15. The system of claim 12, wherein the generated video stream further comprises the first video data, and the generated video stream is configured to allow a remote video conferencing device to simultaneously display the first video data and second video data.
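Claims 6 and 14 recite determining the direction from which identified user speech originates relative to two or more microphone sensors. The sketch below estimates that direction for a two-microphone case from the cross-correlation time difference of arrival; the 10 cm sensor spacing and 48 kHz sample rate are illustrative assumptions:

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s at room temperature
MIC_SPACING = 0.10       # assumed 10 cm between the two microphone sensors
SAMPLE_RATE = 48_000     # assumed audio sample rate, Hz

def direction_of_arrival(left, right):
    """Estimate a bearing in degrees (0 = broadside to the microphone pair).

    `left` and `right` are equal-length 1-D sample arrays from the two
    microphones. The peak of their cross-correlation gives the time
    difference of arrival, which far-field geometry converts to an angle.
    """
    corr = np.correlate(left, right, mode="full")
    lag = int(np.argmax(corr)) - (len(right) - 1)  # delay in samples
    tdoa = lag / SAMPLE_RATE                       # delay in seconds
    # Clamp to the physically realizable range before inverting the geometry.
    s = np.clip(tdoa * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))
```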
16. A system for generating a video stream for use in a video conference, comprising:
a camera sensor configured to capture first video data comprising a plurality of frames from a first field of view of a physical environment at a first resolution;
control logic configured to:
determine activity data for portions of the first video data across the plurality of frames;
define a portion of the captured first video data to extract, based on the determined activity data; and
extract the portion of the captured video data to create second video data, the second video data having less than all of a plurality of pixels of the captured video data; and
video processing logic configured to:
generate a video stream that comprises the second video data and audio data captured within the physical environment; and
transmit the video stream to a video conferencing application executing on a user device, wherein the video conferencing application is configured to process the video stream as an input video stream to facilitate the transmission of the video stream to a remote video conferencing device for display.
17. The system of claim 16, wherein the control logic configured to determine the activity data for portions of the first video data across the plurality of frames is further configured to:
perform a facial detection analysis to detect a plurality of user faces within the captured first video data; and
determine a measure of motion for each of the detected plurality of user faces across the plurality of frames of the first video data.
18. The system of claim 17, wherein the control logic configured to determine the portion of the first video data to extract, based on the determined activity data, is further configured to:
select one of the plurality of user faces having a corresponding determined measure of motion that is indicative of user speech; and
determine the portion of the captured video data to extract, such that the second video data is substantially centered on the selected user face.
19. The system of claim 16, further comprising two or more microphone sensors, and wherein the control logic configured to determine the portion of the captured first video data to extract, based on the determined activity data, is further configured to:
collect audio data from the physical environment using the two or more microphone sensors;
identify user speech within the collected audio data; and
determine a direction from which the identified user speech originates, relative to a physical position of the two or more microphone sensors.
20. The system of claim 19, wherein the control logic configured to determine the portion of the first video data to extract, based on the determined activity data, is further configured to:
identify a physical entity, within the physical environment, located in the determined direction from which the user speech originates, by accessing a mapping structure describing an attribute of the physical environment; and
determine a visual representation of the identified physical entity within the plurality of frames of the first video data.
21. The system of claim 16, wherein the video stream further comprises the first video data, and the transmitted video stream is configured to allow the remote video conferencing device to simultaneously display the first video data and second video data.
22. The method of claim 1, wherein the received first video data is captured at a first resolution, and wherein generating the second video data from the second field of view of the physical environment further comprises:
extracting a portion of the first video data to create the second video data, wherein the second video data has less than all of a plurality of pixels of the first video data.
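Claims 16-22 cover the variant in which the second field of view is produced by extracting pixels from the captured first video data rather than by steering a second camera. A minimal sketch of that extraction step, assuming an illustrative 1280x720 output window centered on a supplied face box:

```python
import numpy as np

def extract_region(frame, face_box, out_w=1280, out_h=720):
    """Crop an out_w x out_h window of `frame` centered on `face_box`.

    `frame` is a NumPy image array (rows x cols x channels) and `face_box`
    is an (x, y, w, h) rectangle, e.g. from a face detector. The returned
    second video data carries fewer pixels than the first video data
    whenever the source frame is larger than the output window.
    """
    rows, cols = frame.shape[:2]
    x, y, w, h = face_box
    cx, cy = x + w // 2, y + h // 2
    left = int(np.clip(cx - out_w // 2, 0, max(cols - out_w, 0)))
    top = int(np.clip(cy - out_h // 2, 0, max(rows - out_h, 0)))
    return frame[top:top + out_h, left:left + out_w]
```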
23. A non-transitory computer-readable medium containing computer program code that, when executed by operation of one or more computer processors, performs an operation for generating a video stream for use in a video conference, the operation comprising:
receiving, by a first video conferencing endpoint device, first video data captured from a first field of view of a physical environment, the first video data comprising a plurality of frames;
determining activity data from portions of the first video data using information provided in the plurality of frames;
generating, by the first video conferencing endpoint device, second video data from a second field of view of the physical environment, based on the determined activity data;
generating a video stream that comprises the generated second video data and audio data captured within the physical environment; and
transmitting the video stream to a video conferencing application executing on a user device, wherein the video conferencing application is configured to process the video stream as an input video stream to facilitate the transmission of the video stream to a remote video conferencing device for display.
24. The non-transitory computer-readable medium of claim 23, wherein generating the second video data from the second field of view of the physical environment further comprises controlling movement of a controlled camera device to capture the second video data, and wherein the received first video data is captured using a wide angle camera device, which is distinct from the controlled camera device.
25. The non-transitory computer-readable medium of claim 24, wherein determining the activity data for portions of the first video data using information provided in the plurality of frames further comprises:
performing a facial detection analysis to detect a plurality of user faces within the first video data; and
determining a measure of motion for each of the detected plurality of user faces using information provided in the plurality of frames of the first video data.
26. The non-transitory computer-readable medium of claim 25, wherein generating the second video data from the second field of view of the physical environment further comprises:
selecting one of the plurality of user faces having a corresponding determined measure of motion that is indicative of user speech; and
determining an orientation of the camera device for capturing video data substantially centered on the selected user face, and
wherein controlling the movement of the camera device to capture the second video data further comprises controlling the movement of the camera device to match the determined orientation.
27. The non-transitory computer-readable medium of claim 26, wherein determining an orientation of the camera device for capturing the video data substantially centered on the selected user face further comprises:
identifying a physical entity corresponding to the selected user face by accessing a model structure describing an attribute of the physical environment; and
determining a direction of the identified physical entity, relative to a physical position of the camera device within the physical environment.
28. The non-transitory computer-readable medium of claim 26, wherein generating the second video data from the second field of view of the physical environment further comprises:
collecting audio data from the physical environment using two or more microphone sensors;
identifying user speech within the collected audio data; and
determining a direction from which the identified user speech originates, relative to a physical position of the two or more microphone sensors, and
wherein determining the orientation of the camera device for capturing the video data substantially centered on the selected user face is further based on the determined direction from which the identified user speech originates.
29. The non-transitory computer-readable medium of claim 23, wherein the received first video data is captured at a first resolution, and wherein generating the second video data from the second field of view of the physical environment further comprises:
extracting a portion of the first video data to create the second video data, wherein the second video data has a second resolution that is less than the first resolution of the first video data.
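Claims 8, 15, and 21 further recite a video stream configured for simultaneous display of the first and second video data. One possible realization (illustrative only; the inset size and corner placement are assumptions) composites the two as picture-in-picture before encoding:

```python
import cv2

def composite_pip(first_frame, second_frame, inset_scale=0.3, margin=16):
    """Overlay a scaled-down copy of second_frame onto first_frame.

    Both arguments are BGR image arrays of the kind produced by OpenCV;
    the inset is placed in the top-right corner of the wide-angle view.
    """
    H, W = first_frame.shape[:2]
    w, h = int(W * inset_scale), int(H * inset_scale)
    inset = cv2.resize(second_frame, (w, h))
    out = first_frame.copy()
    out[margin:margin + h, W - w - margin:W - margin] = inset
    return out
```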
US14/659,373 2015-03-16 2015-03-16 Smart video conferencing system Active US9270941B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/659,373 US9270941B1 (en) 2015-03-16 2015-03-16 Smart video conferencing system

Publications (1)

Publication Number Publication Date
US9270941B1 true US9270941B1 (en) 2016-02-23

Family

ID=55314823

Family Applications (1)

Application Number Title Priority Date Filing Date
US14/659,373 Active US9270941B1 (en) 2015-03-16 2015-03-16 Smart video conferencing system

Country Status (1)

Country Link
US (1) US9270941B1 (en)

Patent Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5434617A (en) * 1993-01-29 1995-07-18 Bell Communications Research, Inc. Automatic tracking camera control system
US6731334B1 (en) * 1995-07-31 2004-05-04 Forgent Networks, Inc. Automatic voice tracking camera system and method of operation
US6392694B1 (en) * 1998-11-03 2002-05-21 Telcordia Technologies, Inc. Method and apparatus for an automatic camera selection system
US6829391B2 (en) * 2000-09-08 2004-12-07 Siemens Corporate Research, Inc. Adaptive resolution system and method for providing efficient low bit rate transmission of image data for distributed applications
US6611281B2 (en) * 2001-11-13 2003-08-26 Koninklijke Philips Electronics N.V. System and method for providing an awareness of remote people in the room during a videoconference
US20040003409A1 (en) * 2002-06-27 2004-01-01 International Business Machines Corporation Rendering system and method for images having differing foveal area and peripheral view area resolutions
US7349008B2 (en) * 2002-11-30 2008-03-25 Microsoft Corporation Automated camera management system and method for capturing presentations using videography rules
US7433327B2 (en) * 2003-10-09 2008-10-07 Hewlett-Packard Development Company, L.P. Method and system for coordinating communication devices to create an enhanced representation of an ongoing event
US8284254B2 (en) * 2005-08-11 2012-10-09 Sightlogix, Inc. Methods and apparatus for a wide area coordinated surveillance system
US8094193B2 (en) * 2005-10-12 2012-01-10 New Vad, Llc Presentation video control system
US8358328B2 (en) * 2008-11-20 2013-01-22 Cisco Technology, Inc. Multiple video camera processing for teleconferencing
US20110128350A1 (en) * 2009-11-30 2011-06-02 Motorola, Inc. Method and apparatus for choosing a desired field of view from a wide-angle image or video
US8471889B1 (en) * 2010-03-11 2013-06-25 Sprint Communications Company L.P. Adjusting an image for video conference display
US8842161B2 (en) * 2010-05-18 2014-09-23 Polycom, Inc. Videoconferencing system having adjunct camera for auto-framing and tracking
US8780168B2 (en) 2011-12-16 2014-07-15 Logitech Europe S.A. Performing DMA transfer of audio and video data received over a serial bus
US8885057B2 (en) 2011-12-16 2014-11-11 Logitech Europe S.A. Performing camera control using a remote control device
US8913103B1 (en) * 2012-02-01 2014-12-16 Google Inc. Method and apparatus for focus-of-attention control
US8872882B2 (en) 2012-02-29 2014-10-28 Logitech Europe S.A. Streaming a videoconference using distributed transcoding
US20130335508A1 (en) * 2012-06-15 2013-12-19 Cisco Technology, Inc. Adaptive Switching of Views for a Video Conference that Involves a Presentation Apparatus
US20140111600A1 (en) * 2012-10-24 2014-04-24 Stephen P. SCHAEFER Automatic Positioning of Videoconference Camera to Presenter at Presentation Device
US20150022636A1 (en) * 2013-07-19 2015-01-22 Nvidia Corporation Method and system for voice capture using face detection in noisy environments

Cited By (70)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10587810B2 (en) * 2015-03-09 2020-03-10 Apple Inc. Automatic cropping of video content
US11393067B2 (en) 2015-03-09 2022-07-19 Apple Inc. Automatic cropping of video content
US11010867B2 (en) 2015-03-09 2021-05-18 Apple Inc. Automatic cropping of video content
US11967039B2 (en) * 2015-03-09 2024-04-23 Apple Inc. Automatic cropping of video content
US11363240B2 (en) * 2015-08-14 2022-06-14 Pcms Holdings, Inc. System and method for augmented reality multi-view telepresence
US11962940B2 (en) 2015-08-14 2024-04-16 Interdigital Vc Holdings, Inc. System and method for augmented reality multi-view telepresence
US10701318B2 (en) * 2015-08-14 2020-06-30 Pcms Holdings, Inc. System and method for augmented reality multi-view telepresence
US20190253667A1 (en) * 2015-08-14 2019-08-15 Pcms Holdings, Inc. System and method for augmented reality multi-view telepresence
US9800835B2 (en) * 2015-10-05 2017-10-24 Polycom, Inc. Conversational placement of speakers at one endpoint
US20170099459A1 (en) * 2015-10-05 2017-04-06 Polycom, Inc. Conversational placement of speakers at one endpoint
US10148913B2 (en) 2015-10-05 2018-12-04 Polycom, Inc. Conversational placement of speakers at one endpoint
US11488364B2 (en) 2016-04-01 2022-11-01 Pcms Holdings, Inc. Apparatus and method for supporting interactive augmented reality functionalities
US11539799B2 (en) 2016-05-26 2022-12-27 Logitech Europe S.A. Method and apparatus for transferring information between electronic devices
US10637933B2 (en) 2016-05-26 2020-04-28 Logitech Europe S.A. Method and apparatus for transferring information between electronic devices
US9800832B1 (en) 2016-05-26 2017-10-24 Logitech Europe, S.A. Method and apparatus for facilitating setup, discovery of capabilities and interaction of electronic devices
US10116899B2 (en) 2016-05-26 2018-10-30 Logitech Europe, S.A. Method and apparatus for facilitating setup, discovery of capabilities and interaction of electronic devices
US9979880B2 (en) * 2016-09-28 2018-05-22 Facebook, Inc. Systems and methods for gesture-based control of equipment in video communication
US20180091727A1 (en) * 2016-09-28 2018-03-29 Facebook, Inc. Systems and methods for gesture-based control of equipment in video communication
US10650244B2 (en) 2016-12-12 2020-05-12 Logitech Europe S.A. Video conferencing system and related methods
US10360457B2 (en) 2016-12-12 2019-07-23 Logitech Europe S.A. Video conferencing system and related methods
US9798933B1 (en) 2016-12-12 2017-10-24 Logitech Europe, S.A. Video conferencing system and related methods
US10115396B2 (en) 2017-01-03 2018-10-30 Logitech Europe, S.A. Content streaming system
US10834443B2 (en) 2017-01-27 2020-11-10 Appario Global Solutions (AGS) AG Method and system for transmitting alternative image content of a physical display to different viewers
US11825137B2 (en) 2017-01-27 2023-11-21 Appario Global Solutions (AGS) AG Method and system for transmitting alternative image content of a physical display to different viewers
WO2018138367A1 (en) * 2017-01-27 2018-08-02 Lane Gmbh Method and system for transmitting alternative image content of a physical display to different viewers
US11457252B2 (en) * 2017-01-27 2022-09-27 Appario Global Solutions (AGS) AG Method and system for transmitting alternative image content of a physical display to different viewers
US10841537B2 (en) 2017-06-09 2020-11-17 Pcms Holdings, Inc. Spatially faithful telepresence supporting varying geometries and moving users
US20230300456A1 (en) * 2017-08-15 2023-09-21 American Well Corporation Methods and Apparatus for Remote Camera Control With Intention Based Controls and Machine Learning Vision State Management
US11611690B2 (en) * 2017-08-15 2023-03-21 American Well Corporation Methods and apparatus for remote camera control with intention based controls and machine learning vision state management
US10789038B2 (en) 2018-07-20 2020-09-29 Logitech Europe S.A. Content streaming apparatus and method
US10606551B2 (en) 2018-07-20 2020-03-31 Logitech Europe S.A. Content streaming apparatus and method
US10642573B2 (en) 2018-07-20 2020-05-05 Logitech Europe S.A. Content streaming apparatus and method
US11528156B2 (en) * 2019-03-08 2022-12-13 Yealink (Xiamen) Network Technology Co., Ltd. Secondary-stream data transmission method and display method for use in conference, conference system, and peripheral device
US20200287735A1 (en) * 2019-03-08 2020-09-10 Yealink (Xiamen) Network Technology Co., Ltd. Secondary-stream data transmission method and display method for use in conference, conference system, and peripheral device
US11038704B2 (en) 2019-08-16 2021-06-15 Logitech Europe S.A. Video conference system
US11088861B2 (en) 2019-08-16 2021-08-10 Logitech Europe S.A. Video conference system
US11095467B2 (en) 2019-08-16 2021-08-17 Logitech Europe S.A. Video conference system
US11258982B2 (en) 2019-08-16 2022-02-22 Logitech Europe S.A. Video conference system
US20220036013A1 (en) * 2019-10-18 2022-02-03 Facebook Technologies, Llc In-Call Experience Enhancement for Assistant Systems
US11636438B1 (en) 2019-10-18 2023-04-25 Meta Platforms Technologies, Llc Generating smart reminders by assistant systems
US11966986B2 (en) 2019-10-18 2024-04-23 Meta Platforms, Inc. Multimodal entity and coreference resolution for assistant systems
US11948563B1 (en) 2019-10-18 2024-04-02 Meta Platforms, Inc. Conversation summarization during user-control task execution for assistant systems
US11861674B1 (en) 2019-10-18 2024-01-02 Meta Platforms Technologies, Llc Method, one or more computer-readable non-transitory storage media, and a system for generating comprehensive information for products of interest by assistant systems
US11704745B2 (en) 2019-10-18 2023-07-18 Meta Platforms, Inc. Multimodal dialog state tracking and action prediction for assistant systems
US11699194B2 (en) 2019-10-18 2023-07-11 Meta Platforms Technologies, Llc User controlled task execution with task persistence for assistant systems
US11694281B1 (en) 2019-10-18 2023-07-04 Meta Platforms, Inc. Personalized conversational recommendations by assistant systems
US11688021B2 (en) 2019-10-18 2023-06-27 Meta Platforms Technologies, Llc Suppressing reminders for assistant systems
US20210117681A1 (en) 2019-10-18 2021-04-22 Facebook, Inc. Multimodal Dialog State Tracking and Action Prediction for Assistant Systems
US11688022B2 (en) 2019-10-18 2023-06-27 Meta Platforms, Inc. Semantic representations using structural ontology for assistant systems
US11669918B2 (en) 2019-10-18 2023-06-06 Meta Platforms Technologies, Llc Dialog session override policies for assistant systems
US11567788B1 (en) 2019-10-18 2023-01-31 Meta Platforms, Inc. Generating proactive reminders for assistant systems
WO2021086729A1 (en) * 2019-11-01 2021-05-06 Microsoft Technology Licensing, Llc Automatic detection of presentation surface and generation of associated data stream
US11800213B2 (en) 2020-03-30 2023-10-24 Logitech Europe S.A. Advanced video conferencing systems and methods
US10972655B1 (en) 2020-03-30 2021-04-06 Logitech Europe S.A. Advanced video conferencing systems and methods
US10904446B1 (en) 2020-03-30 2021-01-26 Logitech Europe S.A. Advanced video conferencing systems and methods
US11336817B2 (en) 2020-03-30 2022-05-17 Logitech Europe S.A. Advanced video conferencing systems and methods
US10951858B1 (en) 2020-03-30 2021-03-16 Logitech Europe S.A. Advanced video conferencing systems and methods
US10965908B1 (en) 2020-03-30 2021-03-30 Logitech Europe S.A. Advanced video conferencing systems and methods
CN111953658A (en) * 2020-07-20 2020-11-17 广州灏博信息技术有限公司 Paperless intelligent conference management system and method
US11562639B2 (en) 2020-08-24 2023-01-24 Logitech Europe S.A. Electronic system and method for improving human interaction and activities
US11562638B2 (en) 2020-08-24 2023-01-24 Logitech Europe S.A. Electronic system and method for improving human interaction and activities
US11418559B2 (en) 2020-09-21 2022-08-16 Logitech Europe S.A. Content distribution system
US11445457B2 (en) 2020-09-21 2022-09-13 Logitech Europe S.A. Content distribution system
US20220109822A1 (en) * 2020-10-02 2022-04-07 Facebook Technologies, Llc Multi-sensor camera systems, devices, and methods for providing image pan, tilt, and zoom functionality
US11470162B2 (en) * 2021-01-30 2022-10-11 Zoom Video Communications, Inc. Intelligent configuration of personal endpoint devices
US20220247824A1 (en) * 2021-01-30 2022-08-04 Zoom Video Communications, Inc. Intelligent configuration of personal endpoint devices
US11350029B1 (en) 2021-03-29 2022-05-31 Logitech Europe S.A. Apparatus and method of detecting and displaying video conferencing groups
EP4228250A1 (en) * 2021-10-14 2023-08-16 COCOSOFT Systems GmbH Method for controlling a video conference system of a business media system
WO2023167693A1 (en) * 2022-02-23 2023-09-07 Huddly Inc. Autonomous video conferencing system with virtual director assistance
US20230269468A1 (en) * 2022-02-23 2023-08-24 Huddly As Autonomous video conferencing system with virtual director assistance

Similar Documents

Publication Publication Date Title
US9270941B1 (en) Smart video conferencing system
US10750124B2 (en) Methods and system for simulated 3D videoconferencing
WO2017215295A1 (en) Camera parameter adjusting method, robotic camera, and system
US9172907B2 (en) Method and apparatus for dynamically adjusting aspect ratio of images during a video call
US11076127B1 (en) System and method for automatically framing conversations in a meeting or a video conference
US20100289904A1 (en) Video capture device providing multiple resolution video feeds
US20130100235A1 (en) Face equalization in video conferencing
US8687046B2 (en) Three-dimensional (3D) video for two-dimensional (2D) video messenger applications
US9344678B2 (en) Information processing apparatus, information processing method and computer-readable storage medium
JP2018533264A (en) Equipment for video communication
US20170127020A1 (en) Communication system, communication device, and communication method
US20230283888A1 (en) Processing method and electronic device
JP7334470B2 (en) VIDEO PROCESSING DEVICE, VIDEO CONFERENCE SYSTEM, VIDEO PROCESSING METHOD, AND PROGRAM
US20170374319A1 (en) Video image generation system and video image generating method thereof
JP2022036991A (en) Image processing apparatus, image processing method, and program
Nguyen et al. ITEM: Immersive telepresence for entertainment and meetings—A practical approach
US10951860B2 (en) Methods, systems, and apparatus for providing video communications
EP4075794A1 (en) Region of interest based adjustment of camera parameters in a teleconferencing environment
US20230291782A1 (en) Orchestrating a multidevice video session
CN112887653B (en) Information processing method and information processing device
CN113632458A (en) System, algorithm and design for wide angle camera perspective experience
WO2022051920A1 (en) Tracking with multiple cameras
US20240089436A1 (en) Dynamic Quantization Parameter for Encoding a Video Frame
WO2024028843A2 (en) Systems and methods for framing meeting environments and participants
WO2024019713A1 (en) Copresence system

Legal Events

Date Code Title Description
AS Assignment

Owner name: LOGITECH EUROPE S.A., SWITZERLAND

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LAVELLE, MARK;REEL/FRAME:035189/0181

Effective date: 20150316

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8