US20150015663A1 - Video chat data processing - Google Patents

Video chat data processing

Info

Publication number
US20150015663A1
Authority
US
United States
Prior art keywords
video
data
video chat
graphics
data stream
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US13/940,883
Other versions
US9232177B2 (en)
Inventor
Sankaranarayanan Venkatasubramanian
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intel Corp
Original Assignee
Intel Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intel Corp filed Critical Intel Corp
Priority to US13/940,883
Assigned to INTEL CORPORATION. Assignment of assignors interest (see document for details). Assignors: VENKATASUBRAMANIAN, SANKARANARAYANAN
Priority to JP2014139916A
Priority to EP14176712.9A
Priority to CN201410331139.4A
Publication of US20150015663A1
Priority to JP2015257418A
Application granted
Publication of US9232177B2
Legal status: Expired - Fee Related
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/141Systems for two-way working between two video terminals, e.g. videophone
    • H04N19/00533
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/44Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder

Definitions

  • Video chat applications enable a user of a computing system to communicate with at least one user of another computing system by transmitting video chat data from one computing system to the other across a network, such as the Internet.
  • the video chat data may be encoded using a codec at the sending computing system, then packaged according to network protocols and sent across the network.
  • the encoded video chat data may be received at the other computing system from the network, extracted from the package, and decoded using the codec.
  • the particular codec used by the video chat application may not be revealed to other components and modules of either computing system.
  • FIG. 1 is a block diagram of a computing device that may be used with video chat data processing
  • FIG. 2 is a process flow diagram of a method that enables video chat data processing
  • FIG. 3 is an illustration showing a manner of decoding incoming video chat data
  • FIG. 4 is a control flow diagram of a system showing a manner of decoding incoming video chat data
  • FIG. 5 is a process flow diagram of another method that enables video chat data processing
  • FIG. 6 is an illustration showing a manner of encoding captured video chat data
  • FIG. 7 is a control flow diagram of a system showing a manner of encoding captured video chat data
  • FIG. 8 is a block diagram of an exemplary system that processes video chat data
  • FIG. 9 is a schematic of a small form factor device in which the system of FIG. 8 may be embodied.
  • FIG. 10 is a block diagram showing tangible, non-transitory computer-readable media that stores code for video chat data processing.
  • Video chat data includes the images, text, audio, and video associated with a video chat session.
  • Each video chat application encodes or decodes the graphics portion of the video chat data using a codec.
  • a codec is a software or hardware component of a computing device that can encode or decode a stream of data. In some cases, the data is encoded and decoded for data compression purposes.
  • the graphics portion of the video chat data includes raw video data.
  • a video chat application typically implements a codec using software algorithms to compress the data stream.
  • the software algorithms used by the video chat application to implement a codec are executed using the central processing unit (CPU).
  • Graphics hardware includes, but is not limited to, a graphics processing unit (GPU), fixed function hardware, video encode logic, video decode logic, and graphics engines.
  • Video chat applications typically perform the encode or decode functions without using the graphics hardware, as standard interfaces with the graphics hardware may not be available.
  • the graphics hardware is capable of faster, more efficient hardware based encoding and decoding when compared to the software encoding and decoding functions of video chat applications.
  • the power consumption during video chat may be relatively high.
  • the performance of the video chat application may result in the rendered video being slow or choppy, depending on the data throughput of the CPU.
  • Embodiments described herein enable video chat data processing.
  • the graphics, camera, and network drivers may be embedded with intelligence or logic that transfers encode, decode, and post-processing functionality from the video chat application to the graphics hardware.
  • enhanced performance may refer to the improved quality of the video, text, images, and sound presented to the user as well as improved system power consumption.
  • the graphics hardware may process video chat data when standard interfaces with the graphics hardware are not available.
  • a destination computing system and a source computing system are used to describe functions during a video chat session. However, a single system can perform the functions of the destination computing system and the source computing system simultaneously. Moreover, any number of computing systems can participate in a video chat session.
  • Coupled may mean that two or more elements are in direct physical or electrical contact. However, “coupled” may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other.
  • Some embodiments may be implemented in one or a combination of hardware, firmware, and software. Some embodiments may also be implemented as instructions stored on a machine-readable medium, which may be read and executed by a computing platform to perform the operations described herein.
  • a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine, e.g., a computer.
  • a machine-readable medium may include read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; or electrical, optical, acoustical or other form of propagated signals, e.g., carrier waves, infrared signals, digital signals, or the interfaces that transmit and/or receive signals, among others.
  • An embodiment is an implementation or example.
  • Reference in the specification to “an embodiment,” “one embodiment,” “some embodiments,” “various embodiments,” or “other embodiments” means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments, of the present techniques.
  • the various appearances of “an embodiment,” “one embodiment,” or “some embodiments” are not necessarily all referring to the same embodiments. Elements or aspects from an embodiment can be combined with elements or aspects of another embodiment.
  • the elements in some cases may each have a same reference number or a different reference number to suggest that the elements represented could be different and/or similar.
  • an element may be flexible enough to have different implementations and work with some or all of the systems shown or described herein.
  • the various elements shown in the figures may be the same or different. Which one is referred to as a first element and which is called a second element is arbitrary.
  • FIG. 1 is a block diagram of a computing device 100 that may be used with video chat data processing.
  • the computing device 100 may be, for example, a laptop computer, desktop computer, ultrabook, tablet computer, mobile device, smart phone, smart TV, or server, among others.
  • the computing device 100 may include a central processing unit (CPU) 102 that is configured to execute stored instructions, as well as a memory device 104 that stores instructions that are executable by the CPU 102 .
  • the CPU 102 may be coupled to the memory device 104 by a bus 106 .
  • the CPU 102 can be a single core processor, a multi-core processor, a computing cluster, or any number of other configurations.
  • the computing device 100 may include more than one CPU 102 .
  • the computing device 100 may also include a graphics processing unit (GPU) 108 .
  • the CPU 102 may be coupled through the bus 106 to the GPU 108 .
  • the memory device 104 may store instructions that are executable by the GPU 108 .
  • the GPU 108 may be configured to perform any number of graphics operations within the computing device 100 .
  • the GPU 108 may be configured to render or manipulate graphics data such as graphics images, graphics frames, videos, or the like, to be displayed to a user of the computing device 100 .
  • the graphics data may be rendered during a video chat session.
  • the memory device 104 can include random access memory (RAM), read only memory (ROM), flash memory, or any other suitable memory systems.
  • the memory device 104 may include dynamic random access memory (DRAM).
  • the memory device 104 includes one or more drivers 110 .
  • the drivers may include device drivers such as a graphics driver, a network driver, or a camera driver.
  • hardware components such as a graphics encoder may be operated by an encoding module of the graphics driver.
  • a graphics decoder may be operated by a decoding module of the graphics driver.
  • the computing device 100 also includes an image capture mechanism 112 .
  • the image capture mechanism 112 is a camera, webcam, stereoscopic camera, scanner, infrared sensor, or the like.
  • the image capture mechanism 112 may be used to capture graphics data, such as a video data stream or an image, during a video chat session.
  • the graphics driver, the network driver, the camera driver, or any combinations thereof, can detect that the video chat application has initialized or started a video chat session using hints.
  • a hint is any action that can indicate the start of a video chat session.
  • the graphics, network, and camera drivers may each receive different hints based on the manner in which the video chat session was initialized by the video chat application.
  • the graphics driver may receive a call from the video chat application to create surfaces for rendering the video that is displayed during a video chat session.
  • the call received serves as a hint to the graphics driver of an impending video chat session.
  • the camera driver may receive a notification or request for access to the camera functionality from the video chat application.
  • the notification or request serves as a hint to the camera driver of an impending video chat session.
  • the network driver may detect data packets that are being sent or received by the video chat application.
  • the detection of data packets to or from the video chat application may serve as a hint to the network driver of an impending video chat session.
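  • For illustration only, the following C++ sketch shows how the three driver hint paths described above might converge on a shared session flag. All names (SessionState, onCreateSurfaceCall, and so on) are hypothetical and not taken from the patent; a real implementation would live inside kernel- or user-mode driver code rather than a standalone program.

```cpp
#include <iostream>
#include <string>

// Shared flag that each driver can set when it observes a hint that a
// video chat session is starting.
struct SessionState {
    bool videoChatActive = false;
    void notify(const std::string& driver, const std::string& hint) {
        videoChatActive = true;  // any one hint is enough to start tracking
        std::cout << driver << " detected hint: " << hint << "\n";
    }
};

// Each driver recognizes a different hint, as described above.
void onCreateSurfaceCall(SessionState& s)   { s.notify("graphics driver", "create-surface call"); }
void onCameraAccessRequest(SessionState& s) { s.notify("camera driver", "camera access request"); }
void onChatPacketSeen(SessionState& s)      { s.notify("network driver", "video chat packet"); }

int main() {
    SessionState session;
    onCreateSurfaceCall(session);    // app asks for surfaces to render video
    onCameraAccessRequest(session);  // app requests camera functionality
    onChatPacketSeen(session);       // packets to/from the app are detected
    return session.videoChatActive ? 0 : 1;
}
```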
  • a user-mode module of each device driver may also be used to detect hints that indicate the start of a video chat session.
  • the user-mode module of a device driver enables the device driver to execute in the user-mode space of a computing system, rather than in the privileged space of the kernel mode.
  • the device drivers call an application programming interface (API) to access system hardware.
  • a user-mode module of the device driver may determine the identity of the application that is requesting service from the device driver. When the application requesting service is a video chat application, the device driver may use the request by the video chat application as a hint that a video chat session has been initialized.
  • the encoding, decoding, and post-processing functionality can be transferred from the video chat application to the GPU hardware.
  • the GPU hardware is specialized for media functions such as encoding, decoding, and post-processing.
  • the video codec format used by the video chat application may be implemented by the encoder hardware and the decoder hardware of the GPU subsystem.
  • the specialized GPU hardware can provide faster, more efficient encoding, decoding, and post-processing functionality when compared to the encoding, decoding, and post-processing functionality of the video chat application.
  • the video chat application implements such functionality using software, as discussed above. Accordingly, the performance of the video chat application may be improved by using the GPU hardware.
  • the computing device can save power by using the GPU hardware for the encoding, decoding, and post-processing functionality of the video chat application.
  • the CPU 102 may also be connected through the bus 106 to an input/output (I/O) device interface 114 configured to connect the computing device 100 to one or more I/O devices 116 .
  • the I/O devices 116 may include, for example, a keyboard and a pointing device, wherein the pointing device may include a touchpad or a touchscreen, among others.
  • the I/O devices 116 may be built-in components of the computing device 100 , or may be devices that are externally connected to the computing device 100 .
  • the CPU 102 may be linked through the bus 106 to a display interface 118 configured to connect the computing device 100 to one or more display devices 120 .
  • the display devices 120 may include a display screen that is a built-in component of the computing device 100 .
  • the display devices 120 may also include a computer monitor, television, or projector, among others, that is externally connected to the computing device 100 .
  • the computing device also includes a storage device 122 .
  • the storage device 122 is a physical memory such as a hard drive, an optical drive, a thumbdrive, an array of drives, or any combinations thereof.
  • the storage device 122 may also include remote storage drives.
  • the storage device 122 also includes applications 124 .
  • the applications 124 may include a video chat application.
  • the computing device 100 may also include a network interface controller (NIC) 126 configured to connect the computing device 100 through the bus 106 to a network 128 .
  • the network 128 may be a wide area network (WAN), local area network (LAN), or the Internet, among others.
  • the data transmitted across a network is described as streaming data, wherein the streaming data includes data that is packetized according to a network protocol.
  • the packet data includes, but is not limited to, image frames and corresponding audio data.
  • The block diagram of FIG. 1 is not intended to indicate that the computing device 100 is to include all of the components shown in FIG. 1 . Further, the computing device 100 may include any number of additional components not shown in FIG. 1 , depending on the details of the specific implementation. Moreover, the computing device 100 may be implemented as a system on chip (SOC). In an SOC implementation, various components of the computing device 100 are combined onto a single chip substrate.
  • the video chat application may send a notification to the graphics driver to create a surface that is to render video processed by the video chat application.
  • a surface is an area of memory where graphics data is written.
  • the surface created at the request of the video chat application is a render target surface, which may be located in an area of memory that is not managed by the GPU hardware.
  • the GPU will track the surface created at the request of the video chat application by setting a flag for each surface.
  • the flag may be stored by the graphics driver software.
  • a table may be used to indicate the allocated surfaces along with a Boolean value to indicate if the surface is tracked.
  • the graphics driver performs video post-processing on the surface that contains the decoded data and will not render the surface generated by the application. This process may be known as trapping the surface.
  • the graphics driver may also change the create surface notification sent by the video chat application from a render target surface to a video process render target surface. In this manner, the surfaces created by the video chat application are converted from a render target surface to a video process render target surface.
  • Video process render target surfaces are managed by the GPU hardware. Further, in some embodiments, the video process render target surface may be located on the graphics card.
  • the graphics driver may also create additional surfaces for decode render targets. The decode render targets may be used to send decoded graphics data to the post processing engine.
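  • As a rough sketch of the surface bookkeeping described above, the following hypothetical C++ fragment keeps an allocation table with a Boolean tracked flag and converts a requested render target into a GPU-managed video process render target. The types and names are illustrative assumptions, not the patent's implementation.

```cpp
#include <cstdint>
#include <unordered_map>

// Hypothetical surface types; the patent describes converting a render
// target surface into a video process render target surface.
enum class SurfaceType { RenderTarget, VideoProcessRenderTarget, DecodeRenderTarget };

struct Surface { SurfaceType type; };

// Allocation table: surface id -> tracked flag, as described above.
std::unordered_map<uint32_t, bool> g_tracked;
std::unordered_map<uint32_t, Surface> g_surfaces;

// Called when the create-surface request comes from the video chat app.
uint32_t createSurfaceForChatApp(uint32_t id) {
    // Convert the requested render target into a GPU-managed
    // video process render target and flag it for trapping.
    g_surfaces[id] = Surface{SurfaceType::VideoProcessRenderTarget};
    g_tracked[id] = true;
    return id;
}

bool isTrapped(uint32_t id) {
    auto it = g_tracked.find(id);
    return it != g_tracked.end() && it->second;
}

int main() {
    createSurfaceForChatApp(42);
    return isTrapped(42) ? 0 : 1;
}
```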
  • the graphics driver can also inform the network driver and the camera driver that a video chat session has been initiated by the video chat application.
  • the network driver may monitor the ports used for sending or receiving video chat data packets for any video chat application activity. If the network driver detects a packet from the video chat application, it marks the traffic from the video chat session as a means of tracking the video chat session. In some cases, the network driver may mark the traffic by using marker bits or any other marking mechanism. The network driver can detect a packet from the video chat application through examining the packet header or data inspection. The network driver may then inform the graphics driver and the camera driver that a video chat session has been initialized.
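  • A minimal sketch of the traffic-marking idea follows, assuming the session's port is already known to the driver; the port value and Packet layout are invented for illustration, and a real classifier would use the port or deep packet header inspection described above.

```cpp
#include <cstdint>
#include <vector>

// Illustrative packet classifier; the patent only says the driver
// inspects ports or packet headers and marks matching traffic.
struct Packet {
    uint16_t dstPort;
    bool chatMark = false;   // "marker bit" for tracked video chat traffic
    std::vector<uint8_t> payload;
};

bool looksLikeChatTraffic(const Packet& p) {
    // Assumed: the session's negotiated port is known to the driver.
    const uint16_t kChatPort = 5555;   // hypothetical port number
    return p.dstPort == kChatPort;
}

void classify(Packet& p) {
    if (looksLikeChatTraffic(p)) p.chatMark = true;  // track this session
}

int main() {
    Packet p{5555, false, {}};
    classify(p);
    return p.chatMark ? 0 : 1;
}
```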
  • computing systems may be designated as source computing systems and destination computing systems.
  • the source computing system is a computing system that is sending video chat data
  • a destination computing system is a computing system that receives video chat data.
  • Each computing system may be both a source computing system and a destination computing system simultaneously during the video chat session. However, for ease of description, each computing system is described as a source computing system or a destination computing system.
  • the source computing system may send video chat data to a destination computing system, where the video chat data includes audio and text.
  • in this case, neither the graphics driver nor the camera driver will participate in the workflow at the source computing system, as an image capture mechanism is not present within the source computing system and the graphics hardware has no graphics data to process.
  • the network driver of the source computing system may be the first driver to detect a new video chat session through the use of various hints. The network driver may notify the other drivers of the new video chat session.
  • the graphics driver of the destination computing system does not enable decoding or post-processing functionality of the graphics hardware, as there is no graphics data to process. Accordingly, the manner in which the drivers are notified of an impending video chat session depends on the features of the video chat session.
  • FIG. 2 is a process flow diagram of a method 200 that enables video chat data processing at a destination computing system.
  • the video chat data may be formatted into data packets with various headers according to a networking protocol.
  • Networking protocols include, but are not limited to, the Hyper Text Transfer Protocol (HTTP), the Transmission Control Protocol (TCP), the Internet Protocol (IP), the User Datagram Protocol (UDP), and the Real Time Communication (RTC) protocol.
  • the incoming data packets are pre-processed. Pre-processing the data packets includes ripping the video data from the data packet and sending the video data to the graphics driver for decoding.
  • the network driver may rip the video data stream from the incoming data stream. As discussed above, the network driver may detect a packet associated with the video chat application through port inspection or deep packet header inspection. In some embodiments, another module other than the network driver may be used to rip the video data stream from the incoming data stream.
  • the video data stream is sent to the graphics subsystem while a null video stream is sent to the video chat application.
  • the network driver is used to route the video data stream from the video chat application to the graphics subsystem.
  • the null video stream sent to the video chat application is a placeholder for the actual video data that is processed by the graphics hardware.
  • a packet containing the audio data stream from the incoming data stream is sent to the video chat application along with the null video stream.
  • the video chat application can recognize the audio data stream as an audio session and process the audio data stream so that the audio data stream can be rendered.
  • the network driver may rip the video data onto a shared area of memory. This area of memory may be shared by the graphics hardware and the network driver. The network driver rips the video data to the shared memory, and then informs the graphics driver that there is video data in the shared memory waiting to be consumed by the graphics subsystem.
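  • The pre-processing split described above might be sketched as follows; ChatPacket and the shared queue are hypothetical stand-ins for the protocol packets and for the memory shared between the network driver and the graphics subsystem.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Sketch of the pre-processing split: rip video for the graphics
// subsystem, forward audio plus a null video stream to the app.
struct ChatPacket {
    std::vector<uint8_t> video;  // encoded video payload
    std::vector<uint8_t> audio;  // audio payload
};

// Memory shared between the network driver and the graphics subsystem.
std::vector<std::vector<uint8_t>> g_sharedVideoQueue;

ChatPacket preprocess(const ChatPacket& in) {
    // Rip the video into shared memory for the graphics subsystem ...
    g_sharedVideoQueue.push_back(in.video);
    std::cout << "graphics driver notified: " << g_sharedVideoQueue.size()
              << " frame(s) waiting\n";
    // ... and hand the app the audio plus a null (empty) video stream.
    return ChatPacket{{}, in.audio};
}

int main() {
    ChatPacket incoming{{1, 2, 3}, {9, 9}};
    ChatPacket toApp = preprocess(incoming);
    return toApp.video.empty() ? 0 : 1;  // app sees the null video stream
}
```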
  • the video data stream is processed using the graphics subsystem.
  • the graphics hardware receives the video data stream and then decodes the stream.
  • the graphics hardware may also perform post-processing such as color conversion, scaling, and de-interlacing of the video data.
  • the graphics subsystem can perform any additional image enhancement operations on the decoded video data stream, including, but not limited to, skin toning, and hue, saturation and brightness controlling.
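  • The post-processing stages named above (color conversion, scaling, de-interlacing, and optional enhancements such as skin toning) can be pictured as a chain of operations applied to each decoded frame. The sketch below models that chain on the CPU for clarity; in the design described here these stages run on the graphics hardware, and the stage names are illustrative.

```cpp
#include <functional>
#include <string>
#include <vector>

// A minimal post-processing pipeline sketch. Each stage stands in for a
// hardware operation; a real driver would submit these to fixed-function
// GPU blocks rather than run them on the CPU.
using Frame = std::string;  // placeholder for decoded pixel data
using Stage = std::function<Frame(const Frame&)>;

Frame runPipeline(Frame f, const std::vector<Stage>& stages) {
    for (const auto& s : stages) f = s(f);
    return f;
}

int main() {
    std::vector<Stage> stages = {
        [](const Frame& f) { return f + "+colorConverted"; },
        [](const Frame& f) { return f + "+scaled"; },
        [](const Frame& f) { return f + "+deinterlaced"; },
        [](const Frame& f) { return f + "+skinToned"; },  // optional enhancement
    };
    Frame out = runPipeline("decodedFrame", stages);
    return out.empty() ? 1 : 0;
}
```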
  • the video chat application performs decoding and post-processing functions on the null video stream simultaneously.
  • because the null stream is a placeholder, the packets sent to the video chat application do not contain any data. As a result, no data processing is done by the video chat application. In this manner, the video chat application functions in the same manner as when the actual video data is sent to it.
  • the processed video data is rendered.
  • the graphics driver traps the surfaces so that the placeholder data from the video chat application is not rendered onto the surface.
  • the graphics driver renders the processed video data stream from the graphics subsystem onto the surface.
  • trapping the surface refers to the process by which the graphics driver prevents the video chat application from accessing the surfaces.
  • the graphics driver may discard or ignore any data received from the video chat application for rendering.
  • the surfaces are trapped according to the flag that was set by the graphics driver when the video chat application sent a request to create the surfaces.
  • the video chat application executes a render function on these surfaces, even when the video chat application receives a null video stream for processing, as the video chat application is unaware that it is not handling the decode and post-processing of the video data stream.
  • the video may be rendered to a display from the surface.
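  • Putting the trapping behavior together, a hypothetical render entry point might discard writes from the video chat application to flagged surfaces while accepting the frames produced by the graphics subsystem. All names here are assumptions for illustration.

```cpp
#include <cstdint>
#include <iostream>
#include <string>
#include <unordered_map>

// Render-path sketch: requests from the video chat application against a
// trapped surface are discarded; the GPU-processed frame is written instead.
std::unordered_map<uint32_t, bool> g_trappedSurfaces = {{42, true}};
std::unordered_map<uint32_t, std::string> g_surfaceContents;

void renderRequest(uint32_t surface, const std::string& data, bool fromChatApp) {
    if (fromChatApp && g_trappedSurfaces.count(surface)) {
        // The app is unknowingly rendering its null stream; ignore it.
        std::cout << "trapped: discarding app data for surface " << surface << "\n";
        return;
    }
    g_surfaceContents[surface] = data;  // e.g., GPU decode/post-process output
}

int main() {
    renderRequest(42, "null-stream frame", /*fromChatApp=*/true);    // dropped
    renderRequest(42, "gpu-processed frame", /*fromChatApp=*/false); // rendered
    return g_surfaceContents[42] == "gpu-processed frame" ? 0 : 1;
}
```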
  • FIG. 3 is an illustration 300 showing a manner of decoding incoming video chat data at a destination computing system.
  • Incoming data packets 302 from a network are received by network hardware 304 .
  • a network driver 306 detects the video chat data received by the network hardware 304 .
  • the network driver 306 may inform the graphics driver 308 of the video chat session.
  • the graphics driver, after receiving this notification, can track surfaces created by the video chat application 310 .
  • the network driver 306 may cause the network hardware 304 to separate the video data stream from the incoming data stream.
  • the network hardware 304 may also forward the video data stream to a decode module 308 A of the graphics driver 308 .
  • the network driver 306 also causes the network hardware 304 to send a null video stream to the video chat application 310 .
  • the decode module 308 A of the graphics driver 308 causes the graphics decode and post-processing hardware 312 to perform decoding and post-processing functions on the video data stream as described above.
  • a render module 308 B of the graphics driver 308 then causes the GPU Render Hardware 314 to render the decoded and post-processed data.
  • the render module 308 B of the graphics driver 308 is aware that the data sent by the video chat application at block 310 is “dummy” data. Any video data from the video chat application is ignored.
  • the render module 308 B of the graphics driver 308 writes the decoded and post processed video data stream from the graphics decode and post-processing hardware 312 to a surface to be rendered.
  • FIG. 4 is a control flow diagram of a system 400 showing a manner of decoding incoming video chat data.
  • Incoming data packets 302 are input to the system 400 .
  • the network driver 306 inspects the port where video data enters the system at reference number 402 .
  • a video chat decode session is created, and the data is sent to the graphics hardware.
  • the video data is extracted from each packet of data.
  • the extracted video packet is sent to the graphics hardware to be decoded.
  • the decode module 308 A of the graphics driver may operate the graphics hardware in order to decode the video data.
  • the network driver instructs the network hardware to send a null video stream and the audio data extracted from the incoming data packet to the video chat application 310 .
  • a decode session may be initiated at reference number 412 .
  • the incoming frames from the video data stream are continually decoded using the graphics hardware as indicated at reference number 414 .
  • post-processing is performed on the video data frames.
  • the graphics driver 308 may cause the graphics hardware to post-process the decoded video data and render the data as discussed above.
  • the video chat application 310 sends video data to be rendered at reference number 418 .
  • the video data from the video chat application 310 is not rendered, as the packets processed by the video chat application were null data packets, and the resulting video surface contains no information.
  • the graphics driver 308 causes the decoded and post-processed frames received from the graphics hardware to be rendered.
  • FIG. 5 is a process flow diagram of a method 500 that enables video chat data processing at a source computing system.
  • the device drivers may use hints to detect the start of a video chat session. In this manner, the drivers can re-route video chat data so that it may be processed by the graphics subsystem.
  • the camera driver may inform the graphics driver and the network driver that a video chat session has been started when the camera driver detects a request for access from the video chat application.
  • the video data is captured.
  • an image capture mechanism such as a camera is used to capture the video data.
  • audio data may be captured using a microphone.
  • the camera driver causes the image capture mechanism to send the captured video data to the graphics subsystem for encoding.
  • the captured video data may be sent to an encoder of the graphics subsystem.
  • the camera driver also causes the image capture mechanism to send null video data to the video chat application.
  • the audio captured by the microphone is also sent to the video chat application, so that the resulting packet from the video chat application includes the correct audio and null video.
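  • A sketch of the source-side routing just described, with invented types: the camera driver forwards the captured frame to the encoder queue while the application receives a null frame plus the real audio.

```cpp
#include <cstdint>
#include <vector>

// Source-side routing sketch; all structures are illustrative.
struct CapturedFrame { std::vector<uint8_t> pixels; };

std::vector<CapturedFrame> g_encoderQueue;   // consumed by GPU encode hardware

struct AppInput { CapturedFrame video; std::vector<uint8_t> audio; };

AppInput routeCapture(const CapturedFrame& frame, std::vector<uint8_t> audio) {
    g_encoderQueue.push_back(frame);          // real video -> graphics subsystem
    return AppInput{CapturedFrame{}, std::move(audio)};  // null video + audio -> app
}

int main() {
    AppInput toApp = routeCapture(CapturedFrame{{1, 2, 3}}, {7, 7});
    return (toApp.video.pixels.empty() && g_encoderQueue.size() == 1) ? 0 : 1;
}
```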
  • the captured video data is processed using the graphics subsystem.
  • the encode module of the graphics driver uses the graphics hardware, such as an encoder, to encode the captured video data.
  • the encoded data from the graphics subsystem is prepared for transmission across a network.
  • the encoded data is sent to the network driver.
  • the network hardware can intercept the packet sent from the video chat application for transmission across the network, and repackage the packet by inserting the encoded video data from the graphics subsystem. The repackaged packet is then sent across the network according to the network protocol.
  • the network header information remains intact. For example, the network subsystem may use the packet header to keep track of the number of bytes that are sent in the packet.
  • the packet header may be modified to reflect the changes in packet size, thus maintaining accurate header information.
  • Such packaging is performed in accordance with the underlying networking protocol, which ensures that the transmitted packet will not be rejected when it is received.
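  • The repackaging step might be sketched as below. The single length field standing in for the protocol header is an assumption made for illustration; an actual implementation would update whichever header fields the underlying networking protocol requires.

```cpp
#include <cstdint>
#include <vector>

// Repackaging sketch: the app's outgoing packet (header + audio + null
// video) has its null video payload replaced by the hardware-encoded
// video, and the header's length field is updated so the packet still
// conforms to the protocol.
struct OutPacket {
    uint32_t payloadLen = 0;        // stand-in for protocol header fields
    std::vector<uint8_t> audio;
    std::vector<uint8_t> video;     // null (empty) when it leaves the app
};

void repackage(OutPacket& pkt, const std::vector<uint8_t>& encodedVideo) {
    pkt.video = encodedVideo;                                  // insert real video
    pkt.payloadLen = static_cast<uint32_t>(pkt.audio.size() + pkt.video.size());
}

int main() {
    OutPacket pkt{2, {9, 9}, {}};            // app packet: audio + null video
    repackage(pkt, {1, 2, 3, 4});            // splice in GPU-encoded frames
    return pkt.payloadLen == 6 ? 0 : 1;      // header reflects the new size
}
```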
  • the render module of the graphics driver causes the graphics subsystem to render the captured video data alongside the received video data within the video chat application of a source computing system.
  • a video chat application may render the received video chat data in a larger portion of the video chat application, while rendering the source video data in a smaller portion of the video chat application so that a user can see their own image.
  • the graphics subsystem re-uses the raw video stream forwarded by the image capture mechanism when it is called upon to present this source data.
  • color space conversion and scaling may be performed by the graphics hardware before the captured source data is rendered on the same computing system.
  • FIG. 6 is an illustration 600 showing a manner of encoding captured video chat data at a source computing system.
  • the camera software 602 is used to control the capture of raw video data using the camera driver 604 and the camera hardware 606 .
  • the camera software 602 sends null video data to the video chat application 608 .
  • the camera driver 604 causes the captured video data to be sent from the camera hardware 606 to the encoding hardware 612 of the graphics subsystem.
  • the camera driver 604 may also inform the encode module 610 A of the graphics driver and the render module 610 B of the graphics driver that a video chat session has started.
  • the encode module 610 A of the graphics driver communicates with the network driver 614 so that the packet received from the video chat application can be repackaged.
  • the encoded video data is sent to the network hardware 616 from the encoding hardware 612 of the graphics subsystem.
  • the network driver 614 causes the packet from the video chat application 608 to be combined with the encoded video data from the encoding hardware 612 of the graphics subsystem.
  • a multiplexer is used to combine the encoded video data from the encoding hardware 612 of the graphics subsystem with the data packet from the video chat application 608 that includes audio and null video.
  • the multiplexer may be implemented in hardware or software.
  • the network hardware 616 is used to transmit the repackaged data packet across the network.
  • the camera driver 604 also causes the captured video data from the camera hardware 606 to be sent to the GPU render hardware 618 of the GPU subsystem by communicating with the render module 610 B of the graphics driver.
  • the render module 610 B may cause the render hardware 618 of the graphics subsystem to render video data captured by the camera hardware 606 , thereby rendering the captured video data at the same computing system.
  • the video chat application 608 renders the source video data alongside the video and audio data received from a remote computing system.
  • FIG. 7 is a control flow diagram of a system 700 showing a manner of encoding captured video chat data.
  • the video chat application captures video frames at reference number 702 by synchronizing with the camera module 602 .
  • the synchronization includes requesting raw video data from the camera module 602 .
  • the camera module 602 notifies the network driver 614 of the new video chat session as indicated at reference number 704 .
  • the camera module 602 also notifies the encode module 610 A of the graphics driver of the new video chat session as indicated at reference number 706 .
  • the camera module 602 then sends null video frames to the video chat application 608 as indicated at reference number 708 .
  • the captured video data is sent from the camera hardware to the encode module 610 A of the graphics driver as indicated at reference number 710 .
  • the encode module 610 A of the graphics driver initializes the encode hardware of the graphics subsystem as indicated at reference number 712 .
  • the encode hardware of the graphics subsystem then encodes the incoming video data as indicated at reference number 714 .
  • the encode module 610 A of the graphics driver then causes the encoded video data to be sent to the network hardware as indicated at reference number 716 .
  • the corresponding audio data is also sent from the video chat application 608 to the network hardware in a packet as indicated at reference number 718 .
  • the network driver 614 initializes the port used to transmit the video chat data as indicated at reference number 720 .
  • the encoded data may be stored as indicated at reference number 722 .
  • the encoded data is then repackaged into the packet with the audio data from the video chat application as shown at reference number 724 .
  • the repackaged packet may then be transmitted across the network.
  • FIG. 8 is a block diagram of an exemplary system 800 that processes video chat data. Like numbered items are as described with respect to FIG. 1 .
  • the system 800 is a media system.
  • the system 800 may be incorporated into a personal computer (PC), laptop computer, ultra-laptop computer, server computer, tablet, touch pad, portable computer, handheld computer, palmtop computer, personal digital assistant (PDA), cellular telephone, combination cellular telephone/PDA, television, smart device (e.g., smart phone, smart tablet or smart television), mobile internet device (MID), messaging device, data communication device, a printing device, an embedded device or the like.
  • the system 800 comprises a platform 802 coupled to a display 804 .
  • the platform 802 may receive content from a content device, such as content services device(s) 806 or content delivery device(s) 808 , or other similar content sources.
  • a navigation controller 810 including one or more navigation features may be used to interact with, for example, the platform 802 and/or the display 804 . Each of these components is described in more detail below.
  • the platform 802 may include any combination of a chipset 812 , a central processing unit (CPU) 102 , a memory device 104 , a storage device 122 , a graphics subsystem 814 , applications 820 , and a radio 816 .
  • the chipset 812 may provide intercommunication among the CPU 102 , the memory device 104 , the storage device 122 , the graphics subsystem 814 , the applications 820 , and the radio 816 .
  • the chipset 812 may include a storage adapter (not shown) capable of providing intercommunication with the storage device 122 .
  • the applications 820 may be the applications 124 as described above.
  • the components of the system 800 may be implemented as a system on chip (SOC). In an SOC implementation, all components of the platform 802 are combined onto a single chip substrate.
  • the CPU 102 may be implemented as Complex Instruction Set Computer (CISC) or Reduced Instruction Set Computer (RISC) processors, x86 instruction set compatible processors, multi-core, or any other microprocessor or central processing unit (CPU). In some embodiments, the CPU 102 includes multi-core processor(s), multi-core mobile processor(s), or the like.
  • the memory device 104 may be implemented as a volatile memory device such as, but not limited to, a Random Access Memory (RAM), Dynamic Random Access Memory (DRAM), or Static RAM (SRAM).
  • the storage device 122 may be implemented as a non-volatile storage device such as, but not limited to, a magnetic disk drive, optical disk drive, tape drive, solid state drive, an internal storage device, an attached storage device, flash memory, battery backed-up SDRAM (synchronous DRAM), and/or a network accessible storage device.
  • the storage device 122 includes technology to increase the storage performance and provide enhanced protection for valuable digital media when multiple hard drives are included, for example.
  • the graphics subsystem 814 may perform processing of images such as still or video for display.
  • the graphics subsystem 814 may include a graphics processing unit (GPU), such as the GPU 108 , or a visual processing unit (VPU), for example.
  • An analog or digital interface may be used to communicatively couple the graphics subsystem 814 and the display 804 .
  • the interface may be any of a High-Definition Multimedia Interface, DisplayPort, wireless HDMI, and/or wireless HD compliant techniques.
  • the graphics subsystem 814 may be integrated into the CPU 102 or the chipset 812 .
  • the graphics subsystem 814 may be a stand-alone card communicatively coupled to the chipset 812 .
  • graphics and/or video processing techniques described herein may be implemented in various hardware architectures.
  • graphics and/or video functionality may be integrated within the chipset 812 .
  • a discrete graphics and/or video processor may be used.
  • the graphics and/or video functions may be implemented by a general purpose processor, including a multi-core processor.
  • the functions may be implemented in a consumer electronics device.
  • the radio 816 may include one or more radios capable of transmitting and receiving signals using various suitable wireless communications techniques. Such techniques may involve communications across one or more wireless networks. Exemplary wireless networks include wireless local area networks (WLANs), wireless personal area networks (WPANs), wireless metropolitan area network (WMANs), cellular networks, satellite networks, or the like. In communicating across such networks, the radio 816 may operate in accordance with one or more applicable standards in any version.
  • the display 804 may include any television type monitor or display.
  • the display 804 may include a computer display screen, touch screen display, video monitor, television, or the like.
  • the display 804 may be digital and/or analog.
  • the display 804 is a holographic display.
  • the display 804 may be a transparent surface that may receive a visual projection.
  • Such projections may convey various forms of information, images, objects, or the like.
  • such projections may be a visual overlay for a mobile augmented reality (MAR) application.
  • the platform 802 may display a user interface 818 on the display 804 .
  • the content services device(s) 806 may be hosted by any national, international, or independent service and, thus, may be accessible to the platform 802 via the Internet, for example.
  • the content services device(s) 806 may be coupled to the platform 802 and/or to the display 804 .
  • the platform 802 and/or the content services device(s) 806 may be coupled to a network 128 to communicate (e.g., send and/or receive) media information to and from the network 128 .
  • the content delivery device(s) 808 also may be coupled to the platform 802 and/or to the display 804 .
  • the content services device(s) 806 may include a cable television box, personal computer, network, telephone, or Internet-enabled device capable of delivering digital information.
  • the content services device(s) 806 may include any other similar devices capable of unidirectionally or bidirectionally communicating content between content providers and the platform 802 or the display 804 , via the network 128 or directly. It will be appreciated that the content may be communicated unidirectionally and/or bidirectionally to and from any one of the components in the system 800 and a content provider via the network 128 .
  • Examples of content may include any media information including, for example, video, music, medical and gaming information, and so forth.
  • the content services device(s) 806 may receive content such as cable television programming including media information, digital information, or other content.
  • content providers may include any cable or satellite television or radio or Internet content providers, among others.
  • the platform 802 receives control signals from the navigation controller 810 , which includes one or more navigation features.
  • the navigation features of the navigation controller 810 may be used to interact with the user interface 818 , for example.
  • the navigation controller 810 may be a pointing device or a touchscreen device that may be a computer hardware component (specifically human interface device) that allows a user to input spatial (e.g., continuous and multi-dimensional) data into a computer.
  • Many systems, such as graphical user interfaces (GUIs), televisions, and monitors, allow the user to control and provide data to the computer or television using physical gestures.
  • Physical gestures include but are not limited to facial expressions, facial movements, movement of various limbs, body movements, body language or any combinations thereof. Such physical gestures can be recognized and translated into commands or instructions.
  • Movements of the navigation features of the navigation controller 810 may be echoed on the display 804 by movements of a pointer, cursor, focus ring, or other visual indicators displayed on the display 804 .
  • the navigation features located on the navigation controller 810 may be mapped to virtual navigation features displayed on the user interface 818 .
  • the navigation controller 810 may not be a separate component but, rather, may be integrated into the platform 802 and/or the display 804 .
  • the system 800 may include drivers (not shown) that include technology to enable users to instantly turn on and off the platform 802 with the touch of a button after initial boot-up, when enabled, for example.
  • Program logic may allow the platform 802 to stream content to media adaptors or other content services device(s) 806 or content delivery device(s) 808 when the platform is turned “off.”
  • the chipset 812 may include hardware and/or software support for 6.1 surround sound audio and/or high definition 7.1 surround sound audio, for example.
  • the drivers may include a graphics driver for integrated graphics platforms.
  • the graphics driver includes a peripheral component interconnect express (PCIe) graphics card.
  • any one or more of the components shown in the system 800 may be integrated.
  • the platform 802 and the content services device(s) 806 may be integrated; the platform 802 and the content delivery device(s) 808 may be integrated; or the platform 802 , the content services device(s) 806 , and the content delivery device(s) 808 may be integrated.
  • the platform 802 and the display 804 are an integrated unit.
  • the display 804 and the content service device(s) 806 may be integrated, or the display 804 and the content delivery device(s) 808 may be integrated, for example.
  • the system 800 may be implemented as a wireless system or a wired system.
  • the system 800 may include components and interfaces suitable for communicating over a wireless shared media, such as one or more antennas, transmitters, receivers, transceivers, amplifiers, filters, control logic, and so forth.
  • An example of wireless shared media may include portions of a wireless spectrum, such as the RF spectrum.
  • the system 800 may include components and interfaces suitable for communicating over wired communications media, such as input/output (I/O) adapters, physical connectors to connect the I/O adapter with a corresponding wired communications medium, a network interface card (NIC), disc controller, video controller, audio controller, or the like.
  • wired communications media may include a wire, cable, metal leads, printed circuit board (PCB), backplane, switch fabric, semiconductor material, twisted-pair wire, co-axial cable, fiber optics, or the like.
  • the platform 802 may establish one or more logical or physical channels to communicate information.
  • the information may include media information and control information.
  • Media information may refer to any data representing content meant for a user. Examples of content may include, for example, data from a voice conversation, videoconference, streaming video, electronic mail (email) message, voice mail message, alphanumeric symbols, graphics, image, video, text, and the like. Data from a voice conversation may be, for example, speech information, silence periods, background noise, comfort noise, tones, and the like.
  • Control information may refer to any data representing commands, instructions or control words meant for an automated system. For example, control information may be used to route media information through a system, or instruct a node to process the media information in a predetermined manner. The embodiments, however, are not limited to the elements or the context shown or described in FIG. 8 .
  • FIG. 9 is a schematic of a small form factor device 900 in which the system 800 of FIG. 8 may be embodied. Like numbered items are as described with respect to FIG. 8 .
  • the device 900 is implemented as a mobile computing device having wireless capabilities.
  • a mobile computing device may refer to any device having a processing system and a mobile power source or supply, such as one or more batteries, for example.
  • examples of a mobile computing device may include a personal computer (PC), laptop computer, ultra-laptop computer, server computer, tablet, touch pad, portable computer, handheld computer, palmtop computer, personal digital assistant (PDA), cellular telephone, combination cellular telephone/PDA, television, smart device (e.g., smart phone, smart tablet or smart television), mobile internet device (MID), messaging device, data communication device, and the like.
  • An example of a mobile computing device may also include a computer that is arranged to be worn by a person, such as a wrist computer, finger computer, ring computer, eyeglass computer, belt-clip computer, arm-band computer, shoe computer, clothing computer, or any other suitable type of wearable computer.
  • the mobile computing device may be implemented as a smart phone capable of executing computer applications, as well as voice communications and/or data communications.
  • while some embodiments may be described with a mobile computing device implemented as a smart phone by way of example, it may be appreciated that other embodiments may be implemented using other wired or wireless mobile computing devices as well.
  • the device 900 may include a housing 902 , a display 904 , an input/output (I/O) device 906 , and an antenna 908 .
  • the device 900 may also include navigation features 912 .
  • the display 904 may include any suitable display unit 910 for displaying information appropriate for a mobile computing device.
  • the I/O device 906 may include any suitable I/O device for entering information into a mobile computing device.
  • the I/O device 906 may include an alphanumeric keyboard, a numeric keypad, a touch pad, input keys, buttons, switches, rocker switches, microphones, speakers, a voice recognition device and software, or the like. Information may also be entered into the device 900 by way of microphone. Such information may be digitized by a voice recognition device.
  • FIG. 10 is a block diagram showing tangible, non-transitory computer-readable media 1000 that stores code for video chat data processing.
  • the tangible, non-transitory computer-readable media 1000 may be accessed by a processor 1002 over a computer bus 1004 .
  • the tangible, non-transitory computer-readable medium 1000 may include code configured to direct the processor 1002 to perform the methods described herein.
  • an encode module 1006 may be configured to cause the graphics subsystem to encode image data.
  • a decode module 1008 may be configured to cause the graphics subsystem to decode incoming data packets received from a network.
  • a post-processing module may be configured to cause the graphics subsystem to perform post-processing of the decoded video data.
  • a render module may be used to cause the graphics subsystem to render video data.
  • The block diagram of FIG. 10 is not intended to indicate that the tangible, non-transitory computer-readable media 1000 is to include all of the components shown in FIG. 10 . Further, the tangible, non-transitory computer-readable media 1000 may include any number of additional components not shown in FIG. 10 , depending on the details of the specific implementation.
  • the graphics processing unit includes a decoder.
  • the decoder is to decode a video data stream from an incoming data stream.
  • the graphics processing unit also includes a post processor.
  • the post processor is to perform post-processing of the decoded video data stream.
  • the graphics processing unit includes a renderer.
  • the renderer is to render the post processed video data stream and discard a null video data stream from a video chat application during a video chat session.
  • a device driver may detect the video chat session so that the video data stream from the incoming data is sent to the decoder, and the null video stream is sent to the video chat application.
  • the incoming data stream may include the video data stream and an audio data stream, and the video chat application may receive the audio data stream and the null video data stream.
  • the decoder may decode the video data stream according to a codec of the video chat application. Additionally, post processing the decoded data stream may include any image enhancements to the video data stream. Further, hints may be used to detect a new video chat session.
  • the system includes an encoder, and the encoder is to encode image data.
  • the system also includes a multiplexer, and the multiplexer is to repackage the encoded image data with a data packet from a video chat application during a video chat session.
  • the system includes networking logic, and the networking logic is to transmit the repackaged data packet across a network.
  • the encoder may encode the video data stream according to a codec of the video chat application.
  • An image capture device may capture image data and send the image data to the encoder.
  • the image capture device may also send null image data to the video chat application.
  • the data packet from the video chat application may include null image data and audio data from the video chat session.
  • a renderer may render the image data using the video chat application during the video chat session. Hints may be used to detect a new video chat session.
  • the system includes a display, a radio, and a memory that is to store instructions and that is communicatively coupled to the display.
  • the system also includes a processor communicatively coupled to the radio and the memory, wherein when the processor is to execute the instructions, the processor is to detect a video chat session by an application.
  • the processor also encodes image data that is to be transmitted across a network, wherein a graphics subsystem encodes the image data from an image capture device and the application receives null image data from an image capture device.
  • the processor decodes incoming data packets received from the network, wherein networking logic rips the encoded video data from the incoming data packets and sends the encoded video data to the graphics subsystem to be decoded, and the networking logic sends null video data to the application.
  • the processor may be a graphics processing unit. Additionally, the system may be a system on chip. Further, the encode and decode functions may be performed by the processor instead of the application.
  • a tangible, non-transitory, computer-readable medium comprising code to direct a processor is described herein.
  • the code may direct the processor to encode image data that is to be transmitted across a network, wherein the processor encodes the image data instead of a video chat application.
  • the code may also direct the processor to decode incoming data packets received from the network, wherein encoded image data is ripped from the incoming data packets and sent to the processor to be decoded.
  • the image data may be received from an image capture device. Additionally, the video chat application may encode null video data when the processor encodes the image data. The video chat application may receive null data packets when the processor receives encoded image data.

Abstract

A graphics processing unit and a system are described herein. The graphics processing unit includes a decoder, a post processor, and a renderer. The decoder is to decode a video data stream from an incoming data stream. The post processor is to perform post-processing of the decoded video data stream. The renderer is to render the post processed video data stream and discard a null video data stream from a video chat application during a video chat session.

Description

    BACKGROUND ART
  • Video chat applications enable a user of a computing system to communicate with at least one user of another computing system by transmitting video chat data from one computing system to the other computing system across a network, such as the Internet. The video chat data may be encoded using a codec at the one computing system, then packaged according to network protocols and sent across the network. The encoded video chat data may be received at the other computing system from the network, extracted from the package, and decoded using the codec. As the video chat data is encoded and decoded, the particular codec used by the video chat application may not be revealed to other components and modules of either computing system.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram of a computing device that may be used with video chat data processing;
  • FIG. 2 is a process flow diagram of a method that enables video chat data processing;
  • FIG. 3 is an illustration showing a manner of decoding incoming video chat data;
  • FIG. 4 is a control flow diagram of a system showing a manner of decoding incoming video chat data;
  • FIG. 5 is a process flow diagram of another method that enables video chat data processing;
  • FIG. 6 is an illustration showing a manner of encoding captured video chat data;
  • FIG. 7 is a control flow diagram of a system showing a manner of encoding captured video chat data;
  • FIG. 8 is a block diagram of an exemplary system that processes video chat data;
  • FIG. 9 is a schematic of a small form factor device in which the system of FIG. 8 may be embodied; and
  • FIG. 10 is a block diagram showing tangible, non-transitory computer-readable media that stores code for video chat data processing.
  • The same numbers are used throughout the disclosure and the figures to reference like components and features. Numbers in the 100 series refer to features originally found in FIG. 1; numbers in the 200 series refer to features originally found in FIG. 2; and so on.
  • DESCRIPTION OF THE EMBODIMENTS
  • Video chat data includes the images, text, audio, and video associated with a video chat session. Each video chat application encodes or decodes the graphics portion of the video chat data using a codec. A codec is a software or hardware component of a computing device that can encode or decode a stream of data. In some cases, the data is encoded and decoded for data compression purposes. The graphics portion of the video chat data includes raw video data. A video chat application typically implements a codec using software algorithms to compress the data stream. The software algorithms used by the video chat application to implement a codec are executed using the central processing unit (CPU). However, hardware encode and decode functionality is available on most computing systems through graphics hardware. Graphics hardware includes, but is not limited to, a graphics processing unit (GPU), fixed function hardware, video encode logic, video decode logic, and graphics engines.
  • Video chat applications typically perform the encode or decode functions without using the graphics hardware, as standard interfaces with the graphics hardware may not be available. In some cases, the graphics hardware is capable of faster, more efficient hardware based encoding and decoding when compared to the software encoding and decoding functions of video chat applications. Additionally, by using the CPU to execute encode and decode functionality of a video chat application, the power consumption during video chat may be relatively high. Moreover, the performance of the video chat application may result in the rendered video being slow or choppy, depending on the data throughput of the CPU.
  • Embodiments described herein enable video chat data processing. In some embodiments, the graphics, camera, and network drivers may be embedded with intelligence or logic that transfers encode, decode, and post-processing functionality from the video chat application to the graphics hardware. In this manner, the performance of the video chat is enhanced, where enhanced performance may refer to the improved quality of the video, text, images, and sound presented to the user as well as improved system power consumption. Furthermore, in some embodiments, the graphics hardware may process video chat data when standard interfaces with the graphics hardware are not available. For ease of description, a destination computing system and a source computing system are used to describe functions during a video chat session. However, a single system can perform the functions of the destination computing system and the source computing system simultaneously. Moreover, any number of computing systems can participate in a video chat session.
  • In the following description and claims, the terms “coupled” and “connected,” along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. Rather, in particular embodiments, “connected” may be used to indicate that two or more elements are in direct physical or electrical contact with each other. “Coupled” may mean that two or more elements are in direct physical or electrical contact. However, “coupled” may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other.
  • Some embodiments may be implemented in one or a combination of hardware, firmware, and software. Some embodiments may also be implemented as instructions stored on a machine-readable medium, which may be read and executed by a computing platform to perform the operations described herein. A machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine, e.g., a computer. For example, a machine-readable medium may include read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; or electrical, optical, acoustical or other form of propagated signals, e.g., carrier waves, infrared signals, digital signals, or the interfaces that transmit and/or receive signals, among others.
  • An embodiment is an implementation or example. Reference in the specification to “an embodiment,” “one embodiment,” “some embodiments,” “various embodiments,” or “other embodiments” means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments, of the present techniques. The various appearances of “an embodiment,” “one embodiment,” or “some embodiments” are not necessarily all referring to the same embodiments. Elements or aspects from an embodiment can be combined with elements or aspects of another embodiment.
  • Not all components, features, structures, characteristics, etc. described and illustrated herein need be included in a particular embodiment or embodiments. If the specification states a component, feature, structure, or characteristic “may”, “might”, “can” or “could” be included, for example, that particular component, feature, structure, or characteristic is not required to be included. If the specification or claim refers to “a” or “an” element, that does not mean there is only one of the element. If the specification or claims refer to “an additional” element, that does not preclude there being more than one of the additional element.
  • It is to be noted that, although some embodiments have been described in reference to particular implementations, other implementations are possible according to some embodiments. Additionally, the arrangement and/or order of circuit elements or other features illustrated in the drawings and/or described herein need not be arranged in the particular way illustrated and described. Many other arrangements are possible according to some embodiments.
  • In each system shown in a figure, the elements in some cases may each have a same reference number or a different reference number to suggest that the elements represented could be different and/or similar. However, an element may be flexible enough to have different implementations and work with some or all of the systems shown or described herein. The various elements shown in the figures may be the same or different. Which one is referred to as a first element and which is called a second element is arbitrary.
  • FIG. 1 is a block diagram of a computing device 100 that may be used with video chat data processing. The computing device 100 may be, for example, a laptop computer, desktop computer, ultrabook, tablet computer, mobile device, smart phone, smart TV, or server, among others. The computing device 100 may include a central processing unit (CPU) 102 that is configured to execute stored instructions, as well as a memory device 104 that stores instructions that are executable by the CPU 102. The CPU 102 may be coupled to the memory device 104 by a bus 106. Additionally, the CPU 102 can be a single core processor, a multi-core processor, a computing cluster, or any number of other configurations. Furthermore, the computing device 100 may include more than one CPU 102.
  • The computing device 100 may also include a graphics processing unit (GPU) 108. As shown, the CPU 102 may be coupled through the bus 106 to the GPU 108. The memory device 104 may store instructions that are executable by the GPU 108. The GPU 108 may be configured to perform any number of graphics operations within the computing device 100. For example, the GPU 108 may be configured to render or manipulate graphics data such as graphics images, graphics frames, videos, or the like, to be displayed to a user of the computing device 100. The graphics data may be rendered during a video chat session.
  • The memory device 104 can include random access memory (RAM), read only memory (ROM), flash memory, or any other suitable memory systems. For example, the memory device 104 may include dynamic random access memory (DRAM). The memory device 104 includes one or more drivers 110. The drivers may include device drivers such as a graphics driver, a network driver, or a camera driver. In some examples, hardware components such as a graphics encoder may be operated by an encoding module of the graphics driver. Similarly, in some examples, a graphics decoder may be operated by a decoding module of the graphics driver. The computing device 100 also includes an image capture mechanism 112. In some embodiments, the image capture mechanism 112 is a camera, webcam, stereoscopic camera, scanner, infrared sensor, or the like. The image capture mechanism 112 may be used to capture graphics data, such as a video data stream or an image, during a video chat session.
  • The graphics driver, the network driver, the camera driver, or any combinations thereof, can detect that the video chat application has initialized or started a video chat session using hints. In some cases, a hint is any action that can indicate the start of a video chat session. The graphics, network, and camera drivers may each receive different hints based on the manner in which the video chat session was initialized by the video chat application. For example, the graphics driver may receive a call from the video chat application to create surfaces for rendering the video that is displayed during a video chat session. In such an example, the call received serves as a hint to the graphics driver of an impending video chat session. In another example, the camera driver may receive a notification or request for access to the camera functionality from the video chat application. In such an example, the notification or request serves as a hint to the camera driver of an impending video chat session. Furthermore, as an example, the network driver may detect data packets that are being sent or received by the video chat application. In such an example, the detection of data packets to or from the video chat application may serve as a hint to the network driver of an impending video chat session.
  • A user-mode module of each device driver may also be used to detect hints that indicate the start of a video chat session. The user-mode module of a device driver enables the device driver to execute in the user-mode space of a computing system, rather than in the privileged space of the kernel mode. By executing in the user-mode, the device drivers call an application programming interface (API) to access system hardware. In some embodiments, a user-mode module of the device driver may determine the identity of the application that is requesting service from the device driver. When the application requesting service is a video chat application, the device driver may use the request by the video chat application as a hint that a video chat session has been initialized.
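  • By way of illustration only, the following sketch shows how such a user-mode identity check might look. It assumes a hypothetical driver callback (OnServiceRequest) and an invented allow-list of video chat executable names; none of these identifiers come from any real driver interface.

```cpp
// Hypothetical user-mode hint detection: a service request from a known
// video chat executable is treated as a hint that a session is starting.
#include <algorithm>
#include <array>
#include <string>

// Illustrative allow-list; a real driver would likely identify the
// requesting application more robustly than by process image name.
constexpr std::array<const char*, 3> kChatApps = {
    "videochat.exe", "chatclient.exe", "conference.exe"};

bool IsVideoChatHint(const std::string& requesting_process) {
  return std::any_of(kChatApps.begin(), kChatApps.end(),
                     [&](const char* app) { return requesting_process == app; });
}

// Invented callback: invoked whenever an application requests service
// from the device driver.
void OnServiceRequest(const std::string& process_name, bool& session_hint) {
  if (IsVideoChatHint(process_name)) {
    session_hint = true;  // route encode/decode work to the GPU path
  }
}
```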
  • When a hint that a video chat session has been initialized is detected, the encoding, decoding, and post-processing functionality can be transferred from the video chat application to the GPU hardware. Typically, the GPU hardware is specialized for media functions such as encoding, decoding, and post-processing. In some examples, the video codec format used by the video chat application may be implemented by the encoder hardware and the decoder hardware of the GPU subsystem. The specialized GPU hardware can provide faster, more efficient encoding, decoding, and post-processing functionality when compared to the encoding, decoding, and post-processing functionality of the video chat application. Typically, the video chat application implements such functionality using software, as discussed above. Accordingly, the performance of the video chat application may be improved by using the GPU hardware. Furthermore, the computing device can save power by using the GPU hardware for the encoding, decoding, and post-processing functionality of the video chat application.
  • The CPU 102 may also be connected through the bus 106 to an input/output (I/O) device interface 114 configured to connect the computing device 100 to one or more I/O devices 116. The I/O devices 116 may include, for example, a keyboard and a pointing device, wherein the pointing device may include a touchpad or a touchscreen, among others. The I/O devices 116 may be built-in components of the computing device 100, or may be devices that are externally connected to the computing device 100.
  • The CPU 102 may be linked through the bus 106 to a display interface 118 configured to connect the computing device 100 to one or more display devices 120. The display devices 120 may include a display screen that is a built-in component of the computing device 100. The display devices 120 may also include a computer monitor, television, or projector, among others, that is externally connected to the computing device 100.
  • The computing device also includes a storage device 122. The storage device 122 is a physical memory such as a hard drive, an optical drive, a thumbdrive, an array of drives, or any combinations thereof. The storage device 122 may also include remote storage drives. The storage device 122 also includes applications 124. The applications 124 may include a video chat application. The computing device 100 may also include a network interface controller (NIC) 126 configured to connect the computing device 100 through the bus 106 to a network 128. The network 128 may be a wide area network (WAN), local area network (LAN), or the Internet, among others. The data transmitted across a network is described as streaming data, wherein the streaming data includes data that is packaged into packets according to a network protocol. The packet data includes, but is not limited to, image frames and corresponding audio data.
  • The block diagram of FIG. 1 is not intended to indicate that the computing device 100 is to include all of the components shown in FIG. 1. Further, the computing device 100 may include any number of additional components not shown in FIG. 1, depending on the details of the specific implementation. Moreover, the computing device 100 may be implemented as a system on chip (SOC). In an SOC implementation, various components of the computing device 100 are combined onto a single chip substrate.
  • In some embodiments, the video chat application may send a notification to the graphics driver to create a surface that is to render video processed by the video chat application. In some cases, a surface is an area of memory where graphics data is written. In examples, when using APIs such as those provided by DirectX, the surface created at the request of the video chat application is a render target surface, which may be located in an area of memory that is not managed by the GPU hardware. As a result, the GPU will track each surface created at the request of the video chat application by setting a flag for the surface. In examples, the flag may be stored by the graphics driver software. Additionally, in examples, a table may be used to indicate the allocated surfaces along with a Boolean value to indicate whether each surface is tracked. During a render operation from the application, if the application asks for a specific surface to be rendered and the graphics driver has a flag set against this surface, the graphics driver performs video post-processing on the surface that contains the decoded data and does not render the surface generated by the application. This process may be known as trapping the surface.
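  • A minimal sketch of such a tracking table follows, assuming a hypothetical surface handle type and driver callbacks; the names are invented for illustration and do not correspond to any real graphics API.

```cpp
// Hypothetical surface-tracking table kept by the graphics driver: each
// surface allocated at the video chat application's request is recorded
// with a Boolean value indicating whether it is tracked (to be trapped).
#include <cstdint>
#include <unordered_map>

using SurfaceHandle = std::uint64_t;  // illustrative handle type

class SurfaceTracker {
 public:
  // Invented callback: the application asks the driver to create a surface.
  void OnCreateSurface(SurfaceHandle h, bool from_chat_app) {
    tracked_[h] = from_chat_app;  // flag is set only for chat-app surfaces
  }

  // Consulted at render time: a tracked surface is trapped, i.e., the
  // driver post-processes its own decoded frame into the surface and
  // ignores the content the application supplied.
  bool ShouldTrap(SurfaceHandle h) const {
    auto it = tracked_.find(h);
    return it != tracked_.end() && it->second;
  }

 private:
  std::unordered_map<SurfaceHandle, bool> tracked_;
};
```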
  • The graphics driver may also change the create surface notification sent by the video chat application from a render target surface to a video process render target surface. In this manner, the surfaces created by the video chat application are converted from a render target surface to a video process render target surface. Video process render target surfaces are managed by the GPU hardware. Further, in some embodiments, the video process render target surface may be located on the graphics card. The graphics driver may also create additional surfaces for decode render targets. The decode render targets may be used to send decoded graphics data to the post processing engine. The graphics driver can also inform the network driver and the camera driver that a video chat session has been initiated by the video chat application.
  • Even when the video chat application does not have any active video calls or conferences, the network driver may monitor the ports used for sending or receiving video chat data packets for any video chat application activity. If the network driver detects a packet from the video chat application, it marks the traffic as a means of tracking the video chat session. In some cases, the network driver may mark the traffic by using marker bits or any other marking mechanism. The network driver can detect a packet from the video chat application by examining the packet header or through deep data inspection. The network driver may then inform the graphics driver and the camera driver that a video chat session has been initialized.
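  • The detection and marking path might be sketched as follows; the port range, payload signature, and packet layout are all invented for illustration, and real detection would follow the session's actual networking protocol.

```cpp
// Hypothetical network-driver detection: inspect the destination port and
// a payload signature, and mark matching traffic to track the session.
#include <cstdint>
#include <vector>

struct Packet {
  std::uint16_t dst_port = 0;
  bool marked = false;                // illustrative marker bit
  std::vector<std::uint8_t> payload;
};

// Invented assumption: the chat application uses this port range.
bool OnKnownChatPort(std::uint16_t port) {
  return port >= 50000 && port <= 50010;
}

// Stand-in for deep packet inspection of a protocol signature.
bool LooksLikeChatPayload(const std::vector<std::uint8_t>& payload) {
  return payload.size() >= 2 && payload[0] == 0xCA && payload[1] == 0xFE;
}

void InspectAndMark(Packet& p, bool& session_detected) {
  if (OnKnownChatPort(p.dst_port) && LooksLikeChatPayload(p.payload)) {
    p.marked = true;          // mark the traffic to track the chat session
    session_detected = true;  // then notify the graphics and camera drivers
  }
}
```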
  • The manner in which the device drivers are notified of a new video chat session occurs according to how the video chat session is initialized by the video chat application, as well as the video chat features used by the video chat application during the video chat session. In a video chat session, computing systems may be designated as source computing systems and destination computing systems. In some cases, the source computing system is a computing system that is sending video chat data, while a destination computing system is a computing system that receives video chat data. Each computing system may be both a source computing system and a destination computing system simultaneously during the video chat session. However, for ease of description, each computing system is described as a source computing system or a destination computing system.
  • Consider the scenario where an image capture mechanism, such as a webcam, is absent from a source computing system. The source computing system may send video chat data to a destination computing system, where the video chat data includes audio and text. In this case, neither the graphics driver nor the camera driver will participate in the workflow at the source computing system, as an image capture mechanism is not present within the source computing system and the graphics hardware has no graphics data to process. Accordingly, the network driver of the source computing system may be the first driver to detect a new video chat session through the use of various hints. The network driver may notify the other drivers of the new video chat session. When no video stream is transmitted to the destination computing system due to the lack of a webcam, the graphics driver of the destination computing system does not enable decoding or post-processing functionality of the graphics hardware, as there is no graphics data to process. Accordingly, the manner in which the drivers are notified of an impending video chat session depends on the features of the video chat session.
  • FIG. 2 is a process flow diagram of a method 200 that enables video chat data processing at a destination computing system. In some examples, the video chat data may be formatted into data packets with various headers according to a networking protocol. Networking protocols include, but are not limited to, the Hyper Text Transfer Protocol (HTTP), the Transmission Control Protocol (TCP), the Internet Protocol (IP), the User Datagram Protocol (UDP), and the Real Time Communication (RTC) protocol. At block 202, the incoming data packets are pre-processed. Pre-processing the data packets includes ripping the video data from each data packet and sending the video data to the graphics driver for decoding. In some examples, the network driver may rip the video data stream from the incoming data stream. As discussed above, the network driver may detect a packet associated with the video chat application through port inspection or deep packet header inspection. In some embodiments, a module other than the network driver may be used to rip the video data stream from the incoming data stream.
  • At block 204, the video data stream is sent to the graphics subsystem while a null video stream is sent to the video chat application. In some embodiments, the network driver is used to route the video data stream away from the video chat application to the graphics subsystem. The null video stream, sent to the video chat application, is a placeholder for the actual video data that is processed by the graphics hardware. In some embodiments, a packet containing the audio data stream from the incoming data stream is sent to the video chat application along with the null video stream. The video chat application can recognize the audio data stream as an audio session and process the audio data stream so that the audio data stream can be rendered.
  • In some embodiments, the network driver may rip the video data onto a shared area of memory. This area of memory may be shared by the graphics hardware and the network driver. The network driver rips the video data to the shared memory, and then informs the graphics driver that there is video data in the shared memory waiting to be consumed by the graphics subsystem.
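  • A minimal sketch of that producer/consumer handoff follows. Synchronization is reduced to a mutex and condition variable purely for illustration; a real driver pair would use a kernel-appropriate mechanism, and the type names here are invented.

```cpp
// Hypothetical shared memory area between the network driver (producer)
// and the graphics driver (consumer) for ripped, still-encoded video.
#include <condition_variable>
#include <cstdint>
#include <deque>
#include <mutex>
#include <utility>
#include <vector>

struct SharedVideoArea {
  std::mutex m;
  std::condition_variable cv;
  std::deque<std::vector<std::uint8_t>> frames;  // encoded video chunks

  // Network driver side: deposit encoded video, then signal the consumer
  // that data is waiting in the shared memory.
  void Produce(std::vector<std::uint8_t> encoded) {
    {
      std::lock_guard<std::mutex> lk(m);
      frames.push_back(std::move(encoded));
    }
    cv.notify_one();
  }

  // Graphics driver side: block until encoded video is available, then
  // hand it to the hardware decoder.
  std::vector<std::uint8_t> Consume() {
    std::unique_lock<std::mutex> lk(m);
    cv.wait(lk, [this] { return !frames.empty(); });
    auto f = std::move(frames.front());
    frames.pop_front();
    return f;
  }
};
```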
  • At block 206, the video data stream is processed using the graphics subsystem. In some embodiments, the graphics hardware receives the video data stream and then decodes the stream. The graphics hardware may also perform post-processing such as color conversion, scaling, and de-interlacing of the video data. Furthermore, the graphics subsystem can perform any additional image enhancement operations on the decoded video data stream, including, but not limited to, skin toning and hue, saturation, and brightness control. While the graphics subsystem performs post-processing of the decoded video data stream that was received from the network, the video chat application performs decoding and post-processing functions on the null video stream simultaneously. However, as the null stream is a placeholder, the packets sent to the video chat application do not contain any data. As a result, no data processing is done by the video chat application. In this manner, the video chat application functions in the same manner as when the video data is sent to the video chat application.
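  • The per-frame path at block 206 can be pictured as the following sketch, in which every stage is an identity stub standing in for work the graphics hardware would actually perform; the function names are invented.

```cpp
// Illustrative per-frame pipeline: decode, fixed post-processing (color
// conversion, scaling, de-interlacing), then optional image enhancement.
#include <cstdint>
#include <utility>
#include <vector>

using Frame = std::vector<std::uint8_t>;

// Identity stubs standing in for hardware operations.
Frame HwDecode(Frame f)     { return f; }  // codec-specific decode
Frame ColorConvert(Frame f) { return f; }  // e.g., YUV-to-RGB conversion
Frame Scale(Frame f)        { return f; }  // scale to the render target size
Frame Deinterlace(Frame f)  { return f; }  // de-interlacing
Frame Enhance(Frame f)      { return f; }  // skin toning, hue/saturation/brightness

Frame ProcessChatFrame(Frame encoded, bool apply_enhancements) {
  Frame f = Deinterlace(Scale(ColorConvert(HwDecode(std::move(encoded)))));
  return apply_enhancements ? Enhance(std::move(f)) : f;
}
```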
  • At block 208, the processed video data is rendered. When the video chat application calls the graphics driver for rendering, the graphics driver traps the surfaces so that the placeholder data from the video chat application is not rendered onto the surface. The graphics driver renders the processed video data stream from the graphics subsystem onto the surface. In some cases, trapping the surface refers to the process by which the graphics driver prevents the video chat application from accessing the surfaces. The graphics driver may discard or ignore any data received from the video chat application for rendering. In some embodiments, the surfaces are trapped according to the flag that was set by the graphics driver when the video chat application sent a request to create the surfaces. The video chat application executes a render function to these surfaces, even when the video chat application receives a null video stream for processing, as the video chat application is unaware that it is not handling the decode and post processing of the video data stream. The video may be rendered to a display from the surface.
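  • Combining the sketches above, the render-time trap at block 208 might look as follows; SurfaceTracker, SharedVideoArea, and ProcessChatFrame are the hypothetical names introduced earlier, and WriteToSurface is one more invented stub for the actual render.

```cpp
// Hypothetical render trap: a tracked surface receives the driver's own
// decoded and post-processed frame; the application's placeholder content
// is discarded. An untracked surface follows the normal render path.
void WriteToSurface(SurfaceHandle /*surface*/, const Frame& /*pixels*/) {
  // No-op stand-in for the actual render to the surface.
}

void OnRenderRequest(SurfaceHandle surface, const Frame& app_content,
                     SurfaceTracker& tracker, SharedVideoArea& shared) {
  if (tracker.ShouldTrap(surface)) {
    Frame processed =
        ProcessChatFrame(shared.Consume(), /*apply_enhancements=*/true);
    WriteToSurface(surface, processed);  // render the real video
    // app_content (produced from the null stream) is intentionally ignored.
  } else {
    WriteToSurface(surface, app_content);  // normal, untracked path
  }
}
```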
  • FIG. 3 is an illustration 300 showing a manner of decoding incoming video chat data at a destination computing system. Incoming data packets 302 from a network are received by network hardware 304. In some embodiments, a network driver 306 detects the video chat data received by the network hardware 304. Upon first detection of video chat data received by network hardware, the network driver 306 may inform the graphics driver 308 of the video chat session. The graphics driver, after receiving this notification, can track surfaces created by the video chat application 310. The network driver 306 may cause the network hardware 304 to separate the video data stream from the incoming data stream. The network hardware 304 may also forward the video data stream to a decode module 308A of the graphics driver 308. The network driver 306 also causes the network hardware 304 to send a null video stream to the video chat application 310.
  • The decode module 308A of the graphics driver 308 causes the graphics decode and post-processing hardware 312 to perform decoding and post-processing functions on the video data stream as described above. A render module 308B of the graphics driver 308 then causes the GPU render hardware 314 to render the decoded and post-processed data. The render module 308B of the graphics driver 308 is aware that the data sent by the video chat application 310 is "dummy" data. Any video data from the video chat application is ignored. In some embodiments, the render module 308B of the graphics driver 308 writes the decoded and post-processed video data stream from the graphics decode and post-processing hardware 312 to a surface to be rendered.
  • FIG. 4 is a control flow diagram of a system 400 showing a manner of decoding incoming video chat data. Incoming data packets 302 are input to the system 400. The network driver 306 inspects the port where video data enters the system at reference number 402. At reference number 404, if the incoming data packet is a video chat packet, then a video chat decode session is created, and the data is sent to the graphics hardware. As shown at reference number 406, whenever a video chat packet is detected, the video data is extracted from each packet of data. At reference number 408, the extracted video packet is sent to the graphics hardware to be decoded. The decode module 308A of the graphics driver may operate the graphics hardware in order to decode the video data. Additionally, at reference number 410 the network driver instructs the network hardware to send a null video stream and the audio data extracted from the incoming data packet to the video chat application 310.
  • A decode session may be initiated at reference number 412. In the decode session, the incoming frames from the video data stream are continually decoded using the graphics hardware as indicated at reference number 414. At reference number 416, post-processing is performed on the video data frames. Accordingly, the graphics driver 308 may cause the graphics hardware to post-process the decoded video data and render the data as discussed above. The video chat application 310 sends video data to be rendered at reference number 418. However, the video data from the video chat application 310 is not rendered, as the packets processed by the video chat application were null data packets, and the resulting video surface contains no information. The graphics driver 308 causes the decoded and post-processed frames received from the graphics hardware to be rendered.
  • FIG. 5 is a process flow diagram of a method 500 that enables video chat data processing at a source computing system. The device drivers may use hints to detect the start of a video chat session. In this manner, the drivers can re-route video chat data so that it may be processed by the graphics subsystem. In some examples, the camera driver may inform the graphics driver and the network driver that a video chat session has started when the camera driver detects a request for access from the video chat application.
  • At block 502, the video data is captured. In some embodiments, an image capture mechanism such as a camera is used to capture the video data. Moreover, audio data may be captured using a microphone. At block 504, the camera driver causes the image capture mechanism to send the captured video data to the graphics subsystem for encoding. Particularly, the captured video data may be sent to an encoder of the graphics subsystem. The camera driver also causes the image capture mechanism to send null video data to the video chat application. The audio captured by the microphone is also sent to the video chat application, so that the resulting packet from the video chat application includes the correct audio and null video. At block 506, the captured video data is processed using the graphics subsystem. In some embodiments, the encode module of the graphics driver uses the graphics hardware, such as an encoder, to encode the captured video data.
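  • The camera driver's routing at blocks 502-504 can be sketched as follows; the sink callbacks and type names are invented for illustration.

```cpp
// Hypothetical capture routing: real pixels go to the graphics encoder,
// while a same-sized null frame goes to the video chat application so its
// capture pipeline keeps running on placeholder data.
#include <cstdint>
#include <functional>
#include <vector>

using RawFrame = std::vector<std::uint8_t>;

struct CaptureSinks {
  std::function<void(const RawFrame&)> to_encoder;   // graphics subsystem
  std::function<void(const RawFrame&)> to_chat_app;  // video chat application
};

void RouteCapturedFrame(const RawFrame& captured, const CaptureSinks& sinks) {
  sinks.to_encoder(captured);               // real frame: hardware encode
  RawFrame null_frame(captured.size(), 0);  // placeholder of equal size
  sinks.to_chat_app(null_frame);            // application encodes "nothing"
}
```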
  • At block 508, the encoded data from the graphics subsystem is prepared for transmission across a network. In some embodiments, the encoded data is sent to the network driver. The network hardware can intercept the packet sent from the video chat application for transmission across the network, and repackage the packet by inserting the encoded video data from the graphics subsystem. The repackaged packet is then sent across the network according to the network protocol. In some embodiments, as the network hardware repackages the packet by inserting encoded data from the graphics subsystem, the network header information remains intact. For example, the network subsystem may use the packet header to keep track of the number of bytes that are sent in the packet. When the encoded video data from the graphics subsystem is inserted into the packet that is sent from the video chat application, the packet header may be modified to reflect the changes in packet size, thus maintaining accurate header information. Such packaging is performed in accordance with the underlying networking protocol, which ensures that the transmitted packet will not be rejected when it is received.
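  • The repackaging step at block 508 might be sketched as follows. The packet layout and length field are invented for illustration; real repackaging must follow the session's actual networking protocol so the receiver does not reject the packet.

```cpp
// Hypothetical repackaging: splice the GPU-encoded video into the packet
// the chat application produced (real audio plus null video), and update
// the length field so the header stays consistent with the new payload.
#include <cstdint>
#include <utility>
#include <vector>

struct AppPacket {
  std::uint32_t payload_len = 0;    // illustrative length field
  std::vector<std::uint8_t> audio;  // real audio from the application
  std::vector<std::uint8_t> video;  // null video placeholder on arrival
};

void Repackage(AppPacket& pkt, std::vector<std::uint8_t> encoded_video) {
  pkt.video = std::move(encoded_video);  // swap in the real encoded video
  pkt.payload_len = static_cast<std::uint32_t>(
      pkt.audio.size() + pkt.video.size());  // keep header size accurate
}
```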
  • In some embodiments, the render module of the graphics driver causes the graphics subsystem to render the captured video data alongside the received video data within the video chat application of a source computing system. For example, a video chat application may render the received video chat data in a larger portion of the video chat application, while rendering the source video data in a smaller portion of the video chat application so that a user can see their own image. In such a scenario, the graphics subsystem re-uses the raw video stream forwarded by the image capture mechanism when it is called upon to present this source data. In some embodiments, color space conversion and scaling may be performed by the graphics hardware before the captured source data is rendered on the same computing system.
  • FIG. 6 is an illustration 600 showing a manner of encoding captured video chat data at a source computing system. The camera software 602 is used to control the capture of raw video data using the camera driver 604 and the camera hardware 606. The camera software 602 sends null video data to the video chat application 608. The camera driver 604 causes the captured video data to be sent from the camera hardware 606 to the encoding hardware 612 of the graphics subsystem. The camera driver 604 may also inform the encode module 610A of the graphics driver and the render module 610B of the graphics driver that a video chat session has started.
  • The encode module 610A of the graphics driver communicates with the network driver 614 so that the packet received from the video chat application can be repackaged. The encoded video data is sent to the network hardware 616 from the encoding hardware 612 of the graphics subsystem. The network driver 614 causes the packet from the video chat application 608 to be combined with the encoded video data from the encoding hardware 612 of the graphics subsystem. In some embodiments, a multiplexer is used to combine the encoded video data from the encoding hardware 612 of the graphics subsystem with the data packet from the video chat application 608 that includes audio and null video. The multiplexer may be implemented in hardware or software. The network hardware 616 is used to transmit the repackaged data packet across the network.
  • The camera driver 604 also causes the captured video data from the camera hardware 606 to be sent to the GPU render hardware 618 of the GPU subsystem by communicating with the render module 610B of the graphics driver. The render module 610B may cause the render hardware 618 of the graphics subsystem to render video data captured by the camera hardware 606, thereby rendering the captured video data at the same computing system. In some embodiments, the video chat application 608 renders the source video data alongside the video and audio data received from a remote computing system.
  • FIG. 7 is a control flow diagram of a system 700 showing a manner of encoding captured video chat data. The video chat application captures video frames at reference number 702 by synchronizing with the camera module 602. In some embodiments, the synchronization includes requesting raw video data from the camera module 602. The camera module 602 notifies the network driver 614 of the new video chat session as indicated at reference number 704. The camera module 602 also notifies the encode module 610A of the graphics driver of the new video chat session as indicated at reference number 706. The camera module 602 then sends null video frames to the video chat application 608 as indicated at reference number 708. The captured video data is sent from the camera hardware to the encode module 610A of the graphics driver as indicated at reference number 710. The encode module 610A of the graphics driver initializes the encode hardware of the graphics subsystem as indicated at reference number 712. The encode hardware of the graphics subsystem then encodes the incoming video data as indicated at reference number 714. The encode module 610A of the graphics driver then causes the encoded video data to be sent to the network hardware as indicated at reference number 716. The corresponding audio data is also sent from the video chat application 608 to the network hardware in a packet as indicated at reference number 718. The network driver 614 initializes the port used to transmit the video chat data as indicated at reference number 720. The encoded data may be stored as indicated at reference number 722. The encoded data is then repackaged into the packet with the audio data from the video chat application as shown at reference number 724. The repackaged packet may be transmitted across a network at reference number 726.
  • FIG. 8 is a block diagram of an exemplary system 800 that processes video chat data. Like numbered items are as described with respect to FIG. 1. In some embodiments, the system 800 is a media system. In addition, the system 800 may be incorporated into a personal computer (PC), laptop computer, ultra-laptop computer, server computer, tablet, touch pad, portable computer, handheld computer, palmtop computer, personal digital assistant (PDA), cellular telephone, combination cellular telephone/PDA, television, smart device (e.g., smart phone, smart tablet or smart television), mobile internet device (MID), messaging device, data communication device, a printing device, an embedded device or the like.
  • In various embodiments, the system 800 comprises a platform 802 coupled to a display 804. The platform 802 may receive content from a content device, such as content services device(s) 806 or content delivery device(s) 808, or other similar content sources. A navigation controller 810 including one or more navigation features may be used to interact with, for example, the platform 802 and/or the display 804. Each of these components is described in more detail below.
  • The platform 802 may include any combination of a chipset 812, a central processing unit (CPU) 102, a memory device 104, a storage device 122, a graphics subsystem 814, applications 820, and a radio 816. The chipset 812 may provide intercommunication among the CPU 102, the memory device 104, the storage device 122, the graphics subsystem 814, the applications 820, and the radio 816. For example, the chipset 812 may include a storage adapter (not shown) capable of providing intercommunication with the storage device 122. The applications 820 may include the applications 124 described above, such as a video chat application. The components of the system 800 may be implemented as a system on chip (SOC). In an SOC implementation, all components of the platform 802 are combined onto a single chip substrate.
  • The CPU 102 may be implemented as Complex Instruction Set Computer (CISC) or Reduced Instruction Set Computer (RISC) processors, x86 instruction set compatible processors, multi-core, or any other microprocessor or central processing unit (CPU). In some embodiments, the CPU 102 includes multi-core processor(s), multi-core mobile processor(s), or the like. The memory device 104 may be implemented as a volatile memory device such as, but not limited to, a Random Access Memory (RAM), Dynamic Random Access Memory (DRAM), or Static RAM (SRAM). The storage device 122 may be implemented as a non-volatile storage device such as, but not limited to, a magnetic disk drive, optical disk drive, tape drive, solid state drive, an internal storage device, an attached storage device, flash memory, battery backed-up SDRAM (synchronous DRAM), and/or a network accessible storage device. In some embodiments, the storage device 122 includes technology to provide increased storage performance and enhanced protection for valuable digital media when multiple hard drives are included, for example.
  • The graphics subsystem 814 may perform processing of images such as still or video for display. The graphics subsystem 814 may include a graphics processing unit (GPU), such as the GPU 108, or a visual processing unit (VPU), for example. An analog or digital interface may be used to communicatively couple the graphics subsystem 814 and the display 804. For example, the interface may be any of a High-Definition Multimedia Interface, DisplayPort, wireless HDMI, and/or wireless HD compliant techniques. The graphics subsystem 814 may be integrated into the CPU 102 or the chipset 812. Alternatively, the graphics subsystem 814 may be a stand-alone card communicatively coupled to the chipset 812.
  • The graphics and/or video processing techniques described herein may be implemented in various hardware architectures. For example, graphics and/or video functionality may be integrated within the chipset 812. Alternatively, a discrete graphics and/or video processor may be used. As still another embodiment, the graphics and/or video functions may be implemented by a general purpose processor, including a multi-core processor. In a further embodiment, the functions may be implemented in a consumer electronics device.
  • The radio 816 may include one or more radios capable of transmitting and receiving signals using various suitable wireless communications techniques. Such techniques may involve communications across one or more wireless networks. Exemplary wireless networks include wireless local area networks (WLANs), wireless personal area networks (WPANs), wireless metropolitan area networks (WMANs), cellular networks, satellite networks, or the like. In communicating across such networks, the radio 816 may operate in accordance with one or more applicable standards in any version.
  • The display 804 may include any television type monitor or display. For example, the display 804 may include a computer display screen, touch screen display, video monitor, television, or the like. The display 804 may be digital and/or analog. In some embodiments, the display 804 is a holographic display. Also, the display 804 may be a transparent surface that may receive a visual projection. Such projections may convey various forms of information, images, objects, or the like. For example, such projections may be a visual overlay for a mobile augmented reality (MAR) application. Under the control of one or more applications 820, the platform 802 may display a user interface 818 on the display 804.
  • The content services device(s) 806 may be hosted by any national, international, or independent service and, thus, may be accessible to the platform 802 via the Internet, for example. The content services device(s) 806 may be coupled to the platform 802 and/or to the display 804. The platform 802 and/or the content services device(s) 806 may be coupled to a network 128 to communicate (e.g., send and/or receive) media information to and from the network 128. The content delivery device(s) 808 also may be coupled to the platform 802 and/or to the display 804.
  • The content services device(s) 806 may include a cable television box, personal computer, network, telephone, or Internet-enabled device capable of delivering digital information. In addition, the content services device(s) 806 may include any other similar devices capable of unidirectionally or bidirectionally communicating content between content providers and the platform 802 or the display 804, via the network 128 or directly. It will be appreciated that the content may be communicated unidirectionally and/or bidirectionally to and from any one of the components in the system 800 and a content provider via the network 128. Examples of content may include any media information including, for example, video, music, medical and gaming information, and so forth.
  • The content services device(s) 806 may receive content such as cable television programming including media information, digital information, or other content. Examples of content providers may include any cable or satellite television or radio or Internet content providers, among others.
  • In some embodiments, the platform 802 receives control signals from the navigation controller 810, which includes one or more navigation features. The navigation features of the navigation controller 810 may be used to interact with the user interface 818, for example. The navigation controller 810 may be a pointing device or a touchscreen device that may be a computer hardware component (specifically, a human interface device) that allows a user to input spatial (e.g., continuous and multi-dimensional) data into a computer. Many systems, such as graphical user interfaces (GUIs), televisions, and monitors, allow the user to control and provide data to the computer or television using physical gestures. Physical gestures include, but are not limited to, facial expressions, facial movements, movement of various limbs, body movements, body language, or any combinations thereof. Such physical gestures can be recognized and translated into commands or instructions.
  • Movements of the navigation features of the navigation controller 810 may be echoed on the display 804 by movements of a pointer, cursor, focus ring, or other visual indicators displayed on the display 804. For example, under the control of the applications 820, the navigation features located on the navigation controller 810 may be mapped to virtual navigation features displayed on the user interface 818. In some embodiments, the navigation controller 810 may not be a separate component but, rather, may be integrated into the platform 802 and/or the display 804.
  • The system 800 may include drivers (not shown) that include technology to enable users to instantly turn the platform 802 on and off with the touch of a button after initial boot-up, when enabled, for example. Program logic may allow the platform 802 to stream content to media adaptors or other content services device(s) 806 or content delivery device(s) 808 when the platform is turned "off." In addition, the chipset 812 may include hardware and/or software support for 6.1 surround sound audio and/or high definition 7.1 surround sound audio, for example. The drivers may include a graphics driver for integrated graphics platforms. In some embodiments, the graphics driver may support a peripheral component interconnect express (PCIe) graphics card.
  • In various embodiments, any one or more of the components shown in the system 800 may be integrated. For example, the platform 802 and the content services device(s) 806 may be integrated; the platform 802 and the content delivery device(s) 808 may be integrated; or the platform 802, the content services device(s) 806, and the content delivery device(s) 808 may be integrated. In some embodiments, the platform 802 and the display 804 are an integrated unit. The display 804 and the content service device(s) 806 may be integrated, or the display 804 and the content delivery device(s) 808 may be integrated, for example.
  • The system 800 may be implemented as a wireless system or a wired system. When implemented as a wireless system, the system 800 may include components and interfaces suitable for communicating over a wireless shared media, such as one or more antennas, transmitters, receivers, transceivers, amplifiers, filters, control logic, and so forth. An example of wireless shared media may include portions of a wireless spectrum, such as the RF spectrum. When implemented as a wired system, the system 800 may include components and interfaces suitable for communicating over wired communications media, such as input/output (I/O) adapters, physical connectors to connect the I/O adapter with a corresponding wired communications medium, a network interface card (NIC), disc controller, video controller, audio controller, or the like. Examples of wired communications media may include a wire, cable, metal leads, printed circuit board (PCB), backplane, switch fabric, semiconductor material, twisted-pair wire, co-axial cable, fiber optics, or the like.
  • The platform 802 may establish one or more logical or physical channels to communicate information. The information may include media information and control information. Media information may refer to any data representing content meant for a user. Examples of content may include, for example, data from a voice conversation, videoconference, streaming video, electronic mail (email) message, voice mail message, alphanumeric symbols, graphics, image, video, text, and the like. Data from a voice conversation may be, for example, speech information, silence periods, background noise, comfort noise, tones, and the like. Control information may refer to any data representing commands, instructions or control words meant for an automated system. For example, control information may be used to route media information through a system, or instruct a node to process the media information in a predetermined manner. The embodiments, however, are not limited to the elements or the context shown or described in FIG. 8.
  • FIG. 9 is a schematic of a small form factor device 900 in which the system 800 of FIG. 8 may be embodied. Like numbered items are as described with respect to FIG. 8. In some embodiments, for example, the device 900 is implemented as a mobile computing device having wireless capabilities. A mobile computing device may refer to any device having a processing system and a mobile power source or supply, such as one or more batteries, for example.
  • As described above, examples of a mobile computing device may include a personal computer (PC), laptop computer, ultra-laptop computer, server computer, tablet, touch pad, portable computer, handheld computer, palmtop computer, personal digital assistant (PDA), cellular telephone, combination cellular telephone/PDA, television, smart device (e.g., smart phone, smart tablet or smart television), mobile internet device (MID), messaging device, data communication device, and the like.
  • An example of a mobile computing device may also include a computer that is arranged to be worn by a person, such as a wrist computer, finger computer, ring computer, eyeglass computer, belt-clip computer, arm-band computer, shoe computer, clothing computer, or any other suitable type of wearable computer. For example, the mobile computing device may be implemented as a smart phone capable of executing computer applications, as well as voice communications and/or data communications. Although some embodiments may be described with a mobile computing device implemented as a smart phone by way of example, it may be appreciated that other embodiments may be implemented using other wired or wireless mobile computing devices as well.
  • As shown in FIG. 9, the device 900 may include a housing 902, a display 904, an input/output (I/O) device 906, and an antenna 908. The device 900 may also include navigation features 912. The display 904 may include any suitable display 910 unit for displaying information appropriate for a mobile computing device. The I/O device 906 may include any suitable I/O device for entering information into a mobile computing device. For example, the I/O device 906 may include an alphanumeric keyboard, a numeric keypad, a touch pad, input keys, buttons, switches, rocker switches, microphones, speakers, a voice recognition device and software, or the like. Information may also be entered into the device 900 by way of microphone. Such information may be digitized by a voice recognition device.
  • FIG. 10 is a block diagram showing tangible, non-transitory computer-readable media 1000 that stores code for video chat data processing. The tangible, non-transitory computer-readable media 1000 may be accessed by a processor 1002 over a computer bus 1004. Furthermore, the tangible, non-transitory computer-readable medium 1000 may include code configured to direct the processor 1002 to perform the methods described herein.
  • The various software components discussed herein may be stored on one or more tangible, non-transitory computer-readable media 1000, as indicated in FIG. 10. For example, an encode module 1006 may be configured to cause the graphics subsystem to encode image data. A decode module 1008 may be configured to cause the graphics subsystem to decode incoming data packets received from a network. A post-processing module may be configured to cause the graphics subsystem to perform post-processing of the decoded video data. Further, a render module may be used to cause the graphics subsystem to render video data.
  • The block diagram of FIG. 10 is not intended to indicate that the tangible, non-transitory computer-readable media 1000 is to include all of the components shown in FIG. 10. Further, the tangible, non-transitory computer-readable media 1000 may include any number of additional components not shown in FIG. 10, depending on the details of the specific implementation.
  • Example 1
  • A graphics processing unit is described herein. The graphics processing unit includes a decoder. The decoder is to decode a video data stream from an incoming data stream. The graphics processing unit also includes a post processor. The post processor is to perform post-processing of the decoded video data stream. Additionally, the graphics processing unit includes a renderer. The renderer is to render the post processed video data stream and discard a null video data stream from a video chat application during a video chat session.
  • A device driver may detect the video chat session so that the video data stream from the incoming data stream is sent to the decoder, and the null video stream is sent to the video chat application. The incoming data stream may include the video data stream and an audio data stream, and the video chat application may receive the audio data stream and the null video data stream. The decoder may decode the video data stream according to a codec of the video chat application. Additionally, post-processing the decoded data stream may include any image enhancements to the video data stream. Further, hints may be used to detect a new video chat session.
  • Example 2
  • A system is described herein. The system includes an encoder, and the encoder is to encode image data. The system also includes a multiplexer, and the multiplexer is to repackage the encoded image data with a data packet from a video chat application during a video chat session. Additionally, the system includes networking logic, and the networking logic is to transmit the repackaged data packet across a network.
  • The encoder may encode the video data stream according to a codec of the video chat application. An image capture device may capture image data and send the image data to the encoder. The image capture device may also send null image data to the video chat application. The data packet from the video chat application may include null image data and audio data from the video chat session. A renderer may render the image data using the video chat application during the video chat session. Hints may be used to detect a new video chat session.
  • Example 3
  • A system is described herein. The system includes a display, a radio, and a memory that is to store instructions and that is communicatively coupled to the display. The system also includes a processor communicatively coupled to the radio and the memory, wherein when the processor is to execute the instructions, the processor is to detect a video chat session by an application. The processor also encodes image data that is to be transmitted across a network, wherein a graphics subsystem encodes the image data from an image capture device and the application receives null image data from an image capture device. Additionally, the processor decodes incoming data packets received from the network, wherein networking logic rips the encoded video data from the incoming data packets and sends the encoded video data to the graphics subsystem to be decoded, and the networking logic sends null video data to the application. The processor may be a graphics processing unit. Additionally, the system may be a system on chip. Further, the encode and decode functions may be performed by the processor instead of the application.
  • Example 4
  • A tangible, non-transitory, computer-readable medium comprising code to direct a processor is described herein. The code may direct the processor to encode image data that is to be transmitted across a network, wherein the processor encodes the image data instead of a video chat application. The code may also direct the processor to decode incoming data packets received from the network, wherein encoded image data is ripped from the incoming data packets and sent to the processor to be decoded.
  • The image data may be received from an image capture device. Additionally, the video chat application may encode null video data when the processor encodes the image data. The video chat application may receive null data packets when the processor receives encoded image data.
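  • A final hypothetical sketch of the decode side of this example, where the encoded image data is ripped from each incoming packet for the processor while the application receives a null data packet; all names here are assumptions:

        # Hypothetical "rip" step: extract the encoded payload for the processor
        # to decode and forward a null-payload packet to the video chat application.
        def rip_and_forward(packets, processor_decode, app_receive):
            for pkt in packets:
                encoded = pkt.pop("video")   # encoded image data for the processor
                processor_decode(encoded)    # processor decodes instead of the app
                pkt["video"] = b""           # null data packet for the application
                app_receive(pkt)

        rip_and_forward(
            [{"seq": 1, "audio": b"a1", "video": b"ENC:frame"}],
            processor_decode=lambda e: print("decode", e),
            app_receive=lambda p: print("app gets", p),
        )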
  • It is to be understood that specifics in the aforementioned examples may be used anywhere in one or more embodiments. For instance, all optional features of the computing device described above may also be implemented with respect to either of the methods or the computer-readable medium described herein. Furthermore, although flow diagrams and/or state diagrams may have been used herein to describe embodiments, the present techniques are not limited to those diagrams or to corresponding descriptions herein. For example, flow need not move through each illustrated box or state or in exactly the same order as illustrated and described herein.
  • The present techniques are not restricted to the particular details listed herein. Indeed, those skilled in the art having the benefit of this disclosure will appreciate that many other variations from the foregoing description and drawings may be made within the scope of the present techniques. Accordingly, it is the following claims including any amendments thereto that define the scope of the present techniques.

Claims (22)

What is claimed is:
1. A graphics processing unit, comprising:
a decoder, wherein the decoder is to decode a video data stream from an incoming data stream;
a post processor, wherein the post processor is to perform post-processing of the decoded video data stream; and
a renderer, wherein the renderer is to render the post processed video data stream and discard a null video data stream from a video chat application during a video chat session.
2. The graphics processing unit of claim 1, comprising a device driver that is to detect the video chat session so that the video data stream from the incoming data stream is sent to the decoder, and the null video stream is sent to the video chat application.
3. The graphics processing unit of claim 1, wherein the incoming data stream is to include the video data stream and an audio data stream.
4. The graphics processing unit of claim 3, wherein the video chat application is to receive the audio data stream and the null video data stream.
5. The graphics processing unit of claim 1, wherein the decoder is to decode the video data stream according to a codec of the video chat application.
6. The graphics processing unit of claim 1, wherein post processing the decoded data stream includes any image enhancements to the video data stream.
7. The graphics processing unit of claim 1, wherein hints are used to detect a new video chat session.
8. A system, comprising:
an encoder, wherein the encoder is to encode image data;
a multiplexer, wherein the multiplexer is to repackage the encoded image data with a data packet from a video chat application during a video chat session; and
networking logic to transmit the repackaged data packet across a network.
9. The system of claim 8, wherein the encoder is to encode the video data stream according to a codec of the video chat application.
10. The system of claim 8, wherein an image capture device is to capture image data and send the image data to the encoder.
11. The system of claim 10, wherein the image capture device is to send null image data to the video chat application.
12. The system of claim 8, wherein the data packet from the video chat application includes null image data and audio data from the video chat session.
13. The system of claim 8 comprising a renderer, wherein the renderer is to render the image data using the video chat application during the video chat session.
14. The system of claim 8, wherein hints are used to detect a new video chat session.
15. A system, comprising:
a display;
a radio;
a memory that is to store instructions and that is communicatively coupled to the display; and
a processor communicatively coupled to the radio and the memory, wherein, when the processor executes the instructions, the processor is to:
detect a video chat session by an application;
encode image data that is to be transmitted across a network, wherein a graphics subsystem encodes the image data from an image capture device and the application receives null image data from the image capture device; and
decode incoming data packets received from the network, wherein networking logic rips the encoded video data from the incoming data packets and sends the encoded video data to the graphics subsystem to be decoded, and the networking logic sends null video data to the application.
16. The system of claim 15, wherein the processor is a graphics processing unit.
17. The system of claim 15, wherein the system is a system on chip.
18. The system of claim 15, wherein encode and decode functions are performed by the processor instead of the application.
19. A tangible, non-transitory, computer-readable medium comprising code to direct a processor to:
encode image data that is to be transmitted across a network, wherein the processor encodes the image data instead of a video chat application; and
decode incoming data packets received from the network, wherein encoded image data is ripped from the incoming data packets and sent to the processor to be decoded.
20. The computer-readable medium of claim 19, wherein the image data is received from an image capture device.
21. The computer-readable medium of claim 19, wherein the video chat application encodes null video data when the processor encodes the image data.
22. The computer-readable medium of claim 19, wherein the video chat application receives null data packets when the processor receives encoded image data.
US13/940,883 2013-07-12 2013-07-12 Video chat data processing Expired - Fee Related US9232177B2 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US13/940,883 US9232177B2 (en) 2013-07-12 2013-07-12 Video chat data processing
JP2014139916A JP5879609B2 (en) 2013-07-12 2014-07-07 Video chat data processing
EP14176712.9A EP2824930A1 (en) 2013-07-12 2014-07-11 Video chat data processing
CN201410331139.4A CN104284129B (en) 2013-07-12 2014-07-11 Video chat data processing
JP2015257418A JP6322834B2 (en) 2013-07-12 2015-12-28 Video chat data processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US13/940,883 US9232177B2 (en) 2013-07-12 2013-07-12 Video chat data processing

Publications (2)

Publication Number Publication Date
US20150015663A1 true US20150015663A1 (en) 2015-01-15
US9232177B2 US9232177B2 (en) 2016-01-05

Family

ID=51178747

Family Applications (1)

Application Number Title Priority Date Filing Date
US13/940,883 Expired - Fee Related US9232177B2 (en) 2013-07-12 2013-07-12 Video chat data processing

Country Status (4)

Country Link
US (1) US9232177B2 (en)
EP (1) EP2824930A1 (en)
JP (2) JP5879609B2 (en)
CN (1) CN104284129B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102651126B1 (en) 2016-11-28 2024-03-26 삼성전자주식회사 Graphic processing apparatus and method for processing texture in graphics pipeline
CN110959286A (en) * 2017-07-31 2020-04-03 索尼公司 Image processing apparatus, image processing method, program, and remote communication system
CN111447439B (en) * 2020-05-18 2022-08-09 Oppo(重庆)智能科技有限公司 Image coding method, image coding device and mobile terminal
US20220200996A1 (en) * 2020-12-23 2022-06-23 Acronis International Gmbh Systems and methods for protecting web conferences from intruders
CN114860440B (en) * 2022-04-29 2023-01-10 北京天融信网络安全技术有限公司 GPU (graphics processing Unit) video memory management method and device

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08195947A (en) * 1994-11-19 1996-07-30 Casio Comput Co Ltd Video telephone device, data transmitting method and video telephone system
JP3079081B2 (en) * 1998-03-30 2000-08-21 沖電気工業株式会社 Teletext transmission system for MPEG2 system
JP2002142192A (en) * 2000-11-01 2002-05-17 Sony Corp Apparatus and method for signal processing and for recording
JP2004289688A (en) * 2003-03-24 2004-10-14 Nec Corp Television telephone apparatus
US8027335B2 (en) 2004-05-05 2011-09-27 Prodea Systems, Inc. Multimedia access device and system employing the same
KR100773508B1 (en) 2005-09-23 2007-11-06 엘지전자 주식회사 Method for call setup of mobile communication terminal
US8233527B2 (en) 2007-05-11 2012-07-31 Advanced Micro Devices, Inc. Software video transcoder with GPU acceleration
US8605779B2 (en) * 2007-06-20 2013-12-10 Microsoft Corporation Mechanisms to conceal real time video artifacts caused by frame loss
CN101262610B (en) * 2008-03-27 2011-05-18 复旦大学 A playing system for AVS-TS programs at portable terminal
WO2009126253A1 (en) * 2008-04-11 2009-10-15 Thomson Licensing Staggercasting with temporal scalability
JP5521403B2 (en) * 2009-06-23 2014-06-11 ソニー株式会社 Information processing apparatus, resource management method, and program
CN102215217B (en) * 2010-04-07 2014-09-17 苹果公司 Establishing a video conference during a phone call
CN102378067B (en) * 2011-11-21 2013-10-02 武汉大学 Robustness mobile video decoding method

Patent Citations (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5602992A (en) * 1993-11-29 1997-02-11 Intel Corporation System for synchronizing data stream transferred from server to client by initializing clock when first packet is received and comparing packet time information with clock
US6742188B1 (en) * 1997-02-04 2004-05-25 Microsoft Corporation Method and system for encoding data in the horizontal overscan portion of a video signal
US8681228B2 (en) * 1997-09-10 2014-03-25 Ricoh Company, Ltd. System and method for displaying an image indicating a positional relation between partially overlapping images
US6829303B1 (en) * 1999-11-17 2004-12-07 Hitachi America, Ltd. Methods and apparatus for decoding images using dedicated hardware circuitry and a programmable processor
US6677976B2 (en) * 2001-10-16 2004-01-13 Sprint Communications Company, LP Integration of video telephony with chat and instant messaging environments
US20030195998A1 (en) * 2002-04-15 2003-10-16 Estrop Stephen J. Facilitating interaction between video renderers and graphics device drivers
US20110102672A1 (en) * 2002-04-15 2011-05-05 Microsoft Corporation Closing a Video Stream Object
US20070065122A1 (en) * 2002-05-24 2007-03-22 Digeo, Inc. System and method for digital multimedia stream conversion
US20040075745A1 (en) * 2002-10-22 2004-04-22 Daniel Mance System and method for generating and processing a stream of video data having multiple data streams
US7248590B1 (en) * 2003-02-18 2007-07-24 Cisco Technology, Inc. Methods and apparatus for transmitting video streams on a packet network
US20040181796A1 (en) * 2003-03-12 2004-09-16 Oracle International Corporation Real-time collaboration client
US20060271971A1 (en) * 2003-06-13 2006-11-30 Jonathan Peter Vincent Drazin Interactive television system
US20050058307A1 (en) * 2003-07-12 2005-03-17 Samsung Electronics Co., Ltd. Method and apparatus for constructing audio stream for mixing, and information storage medium
US20060007958A1 (en) * 2004-07-12 2006-01-12 Samsung Electronics Co., Ltd. Multiplexing method and apparatus to generate transport stream
US20060023073A1 (en) * 2004-07-27 2006-02-02 Microsoft Corporation System and method for interactive multi-view video
US7458894B2 (en) * 2004-09-15 2008-12-02 Microsoft Corporation Online gaming spectator system
US20110216708A1 (en) * 2005-08-24 2011-09-08 Qualcomm Incorporated Transmission of multiplex protocol data units in physical layer packets
US7808988B2 (en) * 2006-02-10 2010-10-05 Packet Video Corporation System and method for connecting mobile devices
US20090028247A1 (en) * 2007-07-02 2009-01-29 Lg Electronics Inc. Digital broadcasting system and data processing method
US20090092234A1 (en) * 2007-10-05 2009-04-09 Apple Inc. Answering video chat requests
US20090144425A1 (en) * 2007-12-04 2009-06-04 Sony Computer Entertainment Inc. Network bandwidth detection, distribution and traffic prioritization
US20090184977A1 (en) * 2008-01-18 2009-07-23 Qualcomm Incorporated Multi-format support for surface creation in a graphics processing system
US20090315886A1 (en) * 2008-06-19 2009-12-24 Honeywell International Inc. Method to prevent resource exhaustion while performing video rendering
US20100056273A1 (en) * 2008-09-04 2010-03-04 Microsoft Corporation Extensible system for customized avatars and accessories
US20100161825A1 (en) * 2008-12-22 2010-06-24 David Randall Ronca On-device multiplexing of streaming media content
US20100284472A1 (en) * 2009-05-11 2010-11-11 Mstar Semiconductor, Inc. Method for Reconstructing Digital Video Data Stream and Apparatus Thereof
US20100316133A1 (en) * 2009-06-10 2010-12-16 Yasutomo Matsuba Detection of Resynchronization Markers When Decoding an MPEG-4 Bitstream
US8358379B1 (en) * 2009-07-31 2013-01-22 Pixelworks, Inc. Post processing displays with on-screen displays
US20110249077A1 (en) * 2010-04-07 2011-10-13 Abuan Joe S Video Conference Network Management for a Mobile Device
US20140092203A1 (en) * 2010-05-12 2014-04-03 Blue Jeans Network, Inc. Systems and Methods for Scalable Composition of Media Streams for Real-Time Multimedia Communication
US8533166B1 (en) * 2010-08-20 2013-09-10 Brevity Ventures LLC Methods and systems for encoding/decoding files and transmission thereof
US20120092445A1 (en) * 2010-10-14 2012-04-19 Microsoft Corporation Automatically tracking user movement in a video chat application
US20120274808A1 (en) * 2011-04-26 2012-11-01 Sheaufoong Chong Image overlay in a mobile device
US20120287344A1 (en) * 2011-05-13 2012-11-15 Hoon Choi Audio and video data multiplexing for multimedia stream switch
US20140078265A1 (en) * 2011-05-19 2014-03-20 Sony Computer Entertainment Inc. Moving picture capturing device, information processing system, information processing device, and image data processing method
US8803991B2 (en) * 2011-10-12 2014-08-12 Cisco Technology, Inc. Snapshot capture in video stream
US20130150161A1 (en) * 2011-12-13 2013-06-13 Empire Technology Development, Llc Graphics render matching for displays
US20140006977A1 (en) * 2012-03-30 2014-01-02 Karriem Lateff Adams Integrated social network internet operating system and management interface
US20130293664A1 (en) * 2012-05-02 2013-11-07 Research In Motion Limited Systems and Methods to Manage Video Chat Contacts
US20130328997A1 (en) * 2012-06-08 2013-12-12 Samsung Electronics Co., Ltd Multiple channel communication using multiple cameras
US20140002576A1 (en) * 2012-06-28 2014-01-02 Microsoft Corporation Cross-Process Media Handling in a Voice-Over-Internet Protocol (VOIP) Application Platform
US20140043358A1 (en) * 2012-08-07 2014-02-13 Intel Corporation Media encoding using changed regions
US20140082638A1 (en) * 2012-09-14 2014-03-20 Kuo Chung GAN Multi-user computer system
US20140098185A1 (en) * 2012-10-09 2014-04-10 Shahram Davari Interactive user selected video/audio views by real time stitching and selective delivery of multiple video/audio sources
US20140118477A1 (en) * 2012-10-26 2014-05-01 Google Inc. Video chat encoding pipeline
US8542265B1 (en) * 2012-10-26 2013-09-24 Google, Inc Video chat encoding pipeline
US20140176548A1 (en) * 2012-12-21 2014-06-26 Nvidia Corporation Facial image enhancement for video communication
US20140269401A1 (en) * 2013-03-14 2014-09-18 General Instrument Corporation Passive measurement of available link bandwidth

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150228106A1 (en) * 2014-02-13 2015-08-13 Vixs Systems Inc. Low latency video texture mapping via tight integration of codec engine with 3d graphics engine
CN104768063A (en) * 2015-04-07 2015-07-08 天脉聚源(北京)教育科技有限公司 Video coding method and device
US20180007115A1 (en) * 2016-07-01 2018-01-04 Cisco Technology, Inc. Fog enabled telemetry embedded in real time multimedia applications
CN108289185A (en) * 2017-01-09 2018-07-17 腾讯科技(深圳)有限公司 A kind of video communication method, device and terminal device
US20220021752A1 (en) * 2020-04-29 2022-01-20 Citrix Systems, Inc. Image acquisition device virtualization for remote computing
US11595482B2 (en) * 2020-04-29 2023-02-28 Citrix Systems, Inc. Image acquisition device virtualization for remote computing
CN111741343A (en) * 2020-06-17 2020-10-02 咪咕视讯科技有限公司 Video processing method and device and electronic equipment

Also Published As

Publication number Publication date
JP2015019365A (en) 2015-01-29
CN104284129B (en) 2018-06-29
US9232177B2 (en) 2016-01-05
JP2016077001A (en) 2016-05-12
JP5879609B2 (en) 2016-03-08
CN104284129A (en) 2015-01-14
EP2824930A1 (en) 2015-01-14
JP6322834B2 (en) 2018-05-16

Similar Documents

Publication Publication Date Title
US9232177B2 (en) Video chat data processing
US10257510B2 (en) Media encoding using changed regions
KR101634500B1 (en) Media workload scheduler
JP6263830B2 (en) Techniques for including multiple regions of interest indicators in compressed video data
US20200151964A1 (en) Scalable real-time face beautification of video images
US10045079B2 (en) Exposing media processing features
US20140198838A1 (en) Techniques for managing video streaming
TWI557683B (en) Mipmap compression
US20150312574A1 (en) Techniques for low power image compression and display
US20150312524A1 (en) Encrypted Screencasting
CN105103512B (en) Method and apparatus for distributed graphics processing
CN112887608A (en) Image processing method and device, image processing chip and electronic equipment
US20140330957A1 (en) Widi cloud mode
US9292898B2 (en) Conditional end of thread mechanism
CN114697731B (en) Screen projection method, electronic equipment and storage medium
US20140146896A1 (en) Video pipeline with direct linkage between decoding and post processing
TWI539795B (en) Media encoding using changed regions
WO2024063928A1 (en) Multi-layer foveated streaming

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTEL CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:VENKATASUBRAMANIAN, SANAKARANARAYANAN;REEL/FRAME:031257/0044

Effective date: 20130802

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

ZAAA Notice of allowance and fees due

Free format text: ORIGINAL CODE: NOA

ZAAB Notice of allowance mailed

Free format text: ORIGINAL CODE: MN/=.

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20240105