US20100138489A1 - Method and Apparatus for Distributing Multimedia to Remote Clients - Google Patents

Method and Apparatus for Distributing Multimedia to Remote Clients

Info

Publication number
US20100138489A1
Authority
US
United States
Prior art keywords
client
stream
video
client computer
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/702,989
Inventor
Janine W. Corley
Jeffrey M. Davey
Bruce W. Johnson
Dale R. Johnson
Christopher D. Lund
Tien H. Nguyen
Keith O. Rice
Leonid B. Volfson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
VIVA VISION Inc
Original Assignee
VIVA VISION Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by VIVA VISION Inc filed Critical VIVA VISION Inc
Priority to US12/702,989
Publication of US20100138489A1

Classifications

    • H ELECTRICITY
      • H04 ELECTRIC COMMUNICATION TECHNIQUE
        • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
          • H04L 67/00 Network arrangements or protocols for supporting network services or applications
            • H04L 67/2866 Architectures; Arrangements
              • H04L 67/30 Profiles
                • H04L 67/303 Terminal profiles
          • H04L 65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
            • H04L 65/1066 Session management
              • H04L 65/1101 Session protocols
            • H04L 65/60 Network streaming of media packets
              • H04L 65/61 Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
                • H04L 65/612 Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio for unicast
              • H04L 65/70 Media network packetisation
              • H04L 65/75 Media network packet handling
                • H04L 65/752 Media network packet handling adapting media to network capabilities
                • H04L 65/756 Media network packet handling adapting media to device capabilities
            • H04L 65/80 Responding to QoS

Definitions

  • the invention relates to devices and systems for communicating over a network. More particularly, the invention relates to a method and apparatus for streaming a multimedia signal to remote viewers connected to a communication network.
  • the constantly increasing processing power available in hardware devices such as personal computers, personal digital assistants, wireless phones and other consumer devices allows highly complex functions to be performed within the device.
  • the hardware devices can perform complex calculations in order to implement functions such as spreadsheets, word processing, database management, data input and data output. Common forms of data output include video and audio output.
  • a personal computer incorporates a monitor as the display terminal.
  • the monitor, or display, on most personal computers can be configured independently of the processor to allow varying levels of resolution.
  • the display for personal computers is typically capable of very high resolution, even on laptop-style computers.
  • in contrast, displays are permanently integrated into personal digital assistants and wireless phones.
  • An electronic device having a dedicated display device formats data for display using dedicated hardware.
  • the processing capabilities of the hardware as well as the display capabilities limit the amount of information displayed and the quality of the display to levels below that typically available from a personal computer.
  • the lower quality is defined as fewer pixels per inch, the inability to display colors or a smaller viewing area.
  • a personal computer may integrate one of a number of hardware interfaces in order to display video output on a monitor.
  • a modular video card or a set of video interface Integrated Circuits (IC's) is used by the personal computer to generate the digital signals required to generate an image on the monitor.
  • the digital signals used by a computer monitor differ from the analog composite video signal used in a television monitor.
  • the personal computer may incorporate dedicated hardware, such as a video capture card, to translate analog composite video signals into the digital signals required to generate an image on the monitor.
  • the personal computer may display, on the monitor, video images captured using a video camera, or video images output from a video source such as a video tape recorder, digital video disk player, laser disk player, or cable television converter.
  • the video capture card also allows the personal computer to save individual video frames provided from a video source.
  • the individual video frames may be saved in any file format recognized as a standard for images.
  • a common graphic image format is the Joint Photographic Experts Group (JPEG) format that is defined in International Organization for Standardization (ISO) standard ISO-10918 titled DIGITAL COMPRESSION AND CODING OF CONTINUOUS-TONE STILL IMAGES.
  • Files can be distributed among hardware devices in electronic form through mechanical means, such as by saving a file onto a portable medium and transferring the file from the portable medium (e.g., floppy disks) to another computer.
  • Such mechanical file transfers are not particularly efficient and may be limited by the capacity of the transfer medium.
  • a more efficient method of transferring files between computers is by using some type of communication link.
  • the most basic communication link is a hardwired connection between the two computers transferring information.
  • information may also be transferred using a network of computers.
  • a computer may be connected to a local network where a number of processors are linked together using dedicated communication links.
  • File transfer speed on a dedicated network is typically constrained by the speed of the communication hardware.
  • the physical network is typically hardwired and capable of providing a large signal bandwidth.
  • More widespread remote networks may take advantage of existing infrastructure in order to provide the communication link between networked processors.
  • One common configuration allows remote devices to connect to a network using telephone land lines.
  • the communication link is the factor constraining data transfer speed where low bandwidth communication links such as telephone land lines are used as network connections.
  • Internet refers to a network or combination of networks spanning any geographical area, such as a local area network, wide area network, regional network, national network, and/or global network.
  • Internet may refer to hardwire networks, wireless networks, or a combination of hardwire and wireless networks.
  • Hardwire networks may include, for example, fiber optic lines, cable lines, ISDN lines, copper lines, etc.
  • Wireless networks may include, for example, cellular systems, personal communication services (PCS) systems, satellite communication systems, packet radio systems, and mobile broadband systems.
  • Individual computers may connect to the Internet using communication links having vastly differing information bandwidths.
  • the fastest connections to the network use fiber connections directly to the network “backbone”. Connections to the network having a lower information bandwidth use E1 or T1 telephone line connections to a fiber link. Of course, the cost of the communication link is proportional to the available information bandwidth.
  • Network connections are not limited to computers. Any hardware device capable of data communication may be connected to a network.
  • Personal digital assistants as well as wireless phones typically incorporate the ability to connect to networks in order to exchange data.
  • Hardware devices often incorporate the hardware or software required to allow the device to communicate over the Internet.
  • the Internet operates as a network to allow data transfer between computers, network-enabled wireless phones, and personal digital assistants.
  • One potential use of networks is the transfer of graphic images and audio data from a host to a number of remote viewers.
  • a computer can store a number of captured graphic images and audio data within its memory. These files can then be distributed over the network to any number of viewers.
  • the host can provide a simulation of real-time video by capturing successive video frames from a source, digitizing the video signal, and providing access to the files. A viewer can then download and display the successive files. The viewer can effectively display real-time streaming video where the host continually captures, digitizes, and provides files based on a real-time video source.
  • a host captures the video and audio signals and generates files associated with each type of signal.
  • graphic images are commonly stored as JPEG encoded images.
  • JPEG encoding can compress the size of the graphic image file but, depending on the graphic resolution selected by the host, the image file may still be very large.
  • the network connection at the host is an initial bottleneck to efficient file transfer. If the host sends files to the network using only a phone modem connection to transfer multiple megabyte files, no viewer will be able to display the video and audio signals in a manner resembling real-time streaming video.
  • the viewer's network connection becomes another data transfer bottleneck even if the host can send files to the network instantaneously.
  • a viewer with a phone modem connection will not be able to transfer high-resolution images at a speed sufficient to support real-time streaming video.
  • One option is for the host to capture and encode any images in the lowest possible resolution to allow even the slowest connection to view real-time streaming video.
  • the effect of capturing low-resolution images to enable the most primitive system's access to the images is to degrade the performance of a majority of viewers. Additionally, the images may need to be saved in such a low resolution that all detail is lost from the images. Degradation of the images, therefore, is not a plausible solution.
  • the viewer has no control over the images.
  • the viewer must rely solely on the host to provide a formatted and sized image having the proper view, resolution, and image settings.
  • the viewer cannot adjust the image being displayed, the image resolution, or the image settings such as brightness, contrast and color.
  • the viewer is unable to control such parameters as compression of the transmitted data and the frame rate of video transmission.
  • the present invention is directed to an apparatus and method of transferring video and/or audio data to viewers such that the viewers can effectively display real-time streaming video output and continuous audio output.
  • the apparatus and method may adapt the streaming video to each viewer such that system performance is not degraded by the presence of viewers having slow connections or by the presence of viewers having different hardware devices.
  • the apparatus and method can further provide a level of image control to the viewer where each viewer can independently control the images received.
  • FIG. 1 is a block diagram of one embodiment of a multimedia distribution system.
  • FIG. 2 is an overview of the main program shown in FIG. 1 .
  • FIG. 3 is a block diagram of a personal computer implementing the host process.
  • FIG. 4A is a diagram illustrating the video capture module.
  • FIG. 4B is a flow chart illustrating the function of the switching system.
  • FIG. 5A is a block diagram of a multimedia distribution module wherein the host operates as a server.
  • FIG. 5B is a block diagram illustrating the broadcast of video data by a web server.
  • FIG. 6 is a block diagram of a video stream format.
  • FIG. 7 is a block diagram of various video block formats.
  • FIG. 8 is a flow chart illustrating motion detection at a block level.
  • FIG. 9 is a flow chart illustrating motion detection at a frame level.
  • FIG. 10 is a flow chart illustrating a method of transmitting only those video image blocks that change.
  • FIG. 11 is a block diagram of an audio stream format.
  • FIG. 12 is a flow chart illustrating the encoding and generation of an audio frame.
  • FIG. 13 is a block diagram illustrating the broadcast of audio data by a web server.
  • FIG. 14 is a flow chart illustrating the dynamic updating of the domain name system.
  • FIG. 15 is a block diagram of a system for mirroring audio and video data.
  • a computer including one or more computers comprising a web server, may be any microprocessor- or processor-controlled device or system that permits access to a network, including terminal devices, such as personal computers, workstations, servers, clients, mini computers, main-frame computers, laptop computers, a network of individual computers, mobile computers, palm-top computers, hand-held computers, set top boxes for a television, interactive televisions, interactive kiosks, personal digital assistants, interactive wireless communications devices, mobile browsers, or a combination thereof.
  • the computers may further possess input devices such as a keyboard, mouse, touchpad, joystick, pen-input-pad, and output devices such as a computer screen and a speaker.
  • These computers may be uni-processor or multi-processor machines. Additionally, these computers include an addressable storage medium or computer accessible medium, such as random access memory (RAM), an electronically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), hard disks, floppy disks, laser disk players, digital video devices, compact disks, video tapes, audio tapes, magnetic recording tracks, electronic networks, and other techniques to transmit or store electronic content such as, by way of example, programs and data.
  • the computers are equipped with a network communication device such as a network interface card, a modem, or other network connection device suitable for connecting to a networked communication medium.
  • the computers execute an appropriate operating system such as Linux, Unix, Microsoft® Windows®, Apple® MacOS®, and IBM® OS/2®.
  • the appropriate operating system includes a communications protocol implementation which handles all incoming and outgoing message traffic passed over a network.
  • the operating system will continue to provide the appropriate communications protocols necessary to establish communication links with a network.
  • the computers may advantageously contain program logic, or other substrate configuration representing data and instructions, which cause the computer to operate in a specific and predefined manner as described herein.
  • the program logic may advantageously be implemented as one or more modules.
  • each of the modules may comprise various sub-routines, procedures, definitional statements and macros.
  • Each of the modules is typically separately compiled and linked into a single executable program. Therefore, the description of each of the modules in this disclosure is used for convenience to describe the functionality of the preferred system. Thus, the processes that are performed by each of the modules may be arbitrarily redistributed to one of the other modules, combined together in a single module, or made available in, for example, a shareable dynamic link library.
  • the modules may advantageously be configured to reside on the addressable storage medium and configured to execute on one or more processors.
  • the modules include, but are not limited to, software or hardware components which perform certain tasks.
  • a module may include, by way of example, components, such as, software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, Java byte codes, circuitry, data, databases, data structures, tables, arrays, and variables.
  • multimedia refers to data in any form.
  • it may include video frames, audio blocks, text data, or any other data or information.
  • Multimedia information may include any individual form or any combination of the various forms.
  • A block diagram of a multimedia distribution system according to aspects of the invention is shown in FIG. 1 .
  • the system is composed of a host 10 interfaced through, for example, a network 20 to at least one client 30 .
  • the host 10 is a computer including one or more processes or modules and may interface with various hardware devices on the computer.
  • a process or module may be a set of instructions implemented in software, firmware or hardware, including any type of programmed step undertaken by components of the system.
  • the client 30 is another computer including one or more process or modules.
  • the client 30 is a remote computer interconnected to the host 10 through a network 20 .
  • the network 20 is any type of communication network as is commonly known by one skilled in the field and as was described previously.
  • the network 20 may be a Local Area Network (LAN), a Wide Area Network (WAN), a public network such as the Internet, or a wireless network or any combination of such networks.
  • the network 20 interconnection between the host 10 and the client 30 may be accomplished using hard wired lines or through wireless Radio Frequency (RF) links.
  • a number of processes operate within the host 10 in order to allow the host 10 to interface with external devices 80 and with the client 30 through the network 20 .
  • One or more capture devices 42 interface with external devices 80 in order to transform the data provided by an external device 80 into a format usable by the host 10 .
  • the capture device 42 is a video capture card that interfaces to an external video source.
  • the video source may be generated by a video camera, video disc player, video cassette recorder, television video output, or any other device capable of generating a video source.
  • the video capture card grabs the frames from the video source, converts them to digital signals, and formats the digital signals into a format usable by the host 10 .
  • the external device 80 may also be a video card within a computer for converting video signals that are routed to a monitor into a format usable by the host 10 .
  • the external devices 80 are not limited to video sources and can include devices or sources of data of interest.
  • the external devices 80 may generate audio data.
  • the capture device 42 interfaces with an audio source to convert the input signal to a digital signal, then to convert the digital signals into a format usable by the host 10 .
  • a variety of external devices 80 may be used to provide an audio signal.
  • An audio signal may be provided from a microphone, a radio, a compact disc player, television audio output, or any other audio source.
  • Multiple external devices 80 may interface with the host 10 .
  • the external devices 80 may provide inputs to the host 10 simultaneously, sequentially, or in some combination.
  • a switcher module 44 is used where there is a controllable switch (not shown) that is used to multiplex signals from multiple sources to a single capture device 42 .
  • the switcher 44 is used where multiple sources are controlled and is omitted if the host 10 does not have control over the selection of the source. If used, the switcher 44 receives control information through a communication port on the computer.
  • An exemplary embodiment of a hardware switch used to multiplex multiple video sources to a single video capture card is provided in copending U.S. patent application Ser. No. 09/439,853, filed Nov. 12, 1999, entitled SIGNAL SWITCHING DEVICE AND METHOD, assigned to the assignee of the current application, and hereby incorporated herein by reference.
  • a similar hardware switch may be used to multiplex multiple audio sources to a single audio capture card.
  • a multimedia operating system module 49 allows the capture devices to interface with one or more capture modules 40 a, 40 b.
  • the capture modules 40 a, 40 b monitor the capture devices and respond to requests for images by transmitting the captured information in JPEG-encoded format, for example, to the main program module 46 .
  • the host also includes a web server module 50 , such as the Apache web server available from the Apache Software Foundation.
  • the web server 50 is used to configure the host 10 as a web server.
  • the web server 50 interfaces the host 10 with the various clients 30 through the network 20 .
  • the web server 50 sets up an initial connection to the client 30 following a client request.
  • One or more Common Gateway Interfaces (CGI) 52 a, 52 b are launched for each client 30 by the web server module 50 .
  • Each CGI 52 submits periodic requests to the main program 46 for updated video frames or audio blocks.
  • the web server 50 also configures the dedicated CGI 52 adapted to the capabilities of each client 30 .
  • the client 30 may monitor the connection and maintain some control over the information sent through the CGI 52 .
  • the client 30 can cause the web server 50 to launch a “set param” CGI module 54 to change connection parameters.
  • the web server 50 conveys the control information to the other host processes through the “set param” CGI 54 .
  • the CGI 52 controls the information flow to the client 30 .
  • the client 30 interfaces to the host through the network 20 using an interface module such as a browser 32 .
  • Commercially available browsers include Netscape Navigator and Microsoft's Internet Explorer.
  • the browser 32 implements the communication formatting and protocol necessary for communication over the network 20 .
  • the client 30 is typically capable of two-way communications with the host 10 .
  • the two-way link allows the client 30 to send information as well as receive information.
  • a TCP/IP socket operating system module 59 running on the host 10 allows the host to establish sockets for communication between the host 10 and the client 30 .
  • the host 10 may also incorporate other modules not directly allocated to establishing communications to the client 30 .
  • an IP PROC 60 may be included within the host 10 when the host 10 is configured to operate over, for example, the Internet.
  • the IP PROC 60 is used to communicate the host's 10 Internet Protocol (IP) address.
  • the IP PROC 60 is particularly useful when the host's IP address is dynamic and changes each time the host 10 initially connects to the network 20 .
  • the IP PROC 60 at the host 10 works in conjunction with a Domain Name System (DNS) host server 90 (described in further detail below with reference to FIG. 14 ) connected to the network to allow clients 30 to locate and establish a connection to the host 10 even though the host 10 has a dynamic IP address.
  • the host implements a user interface 204 to receive input from the user through, for example, a keyboard or a mouse and to provide display and audio output to the user.
  • the output may be in the form of an operating window displayed on a monitor that provides the user with an image display and corresponding control menus that can be accessed using a keyboard, a mouse or other user interface devices.
  • a scheduler 210 operates simultaneously with the user interface 204 to control the operation of various modules.
  • the user or an administrator of the host system may set up the scheduling of multimedia capture using the scheduler 210 . Images or audio may be captured over particular time windows under the control of the scheduler 210 and those time windows can be selected or set by a user.
  • a licensing module 214 is used to either provide or deny the user access to specific features within the system. As is described in detail below, many features may be included in the system. The modularized design of the features allows independent control over user access to each feature. Independent control over user access allows the system to be tailored to the specific user's needs. A user can initially set up the minimum configuration required to support the basic system requirements and then later upgrade to additional features to provide system enhancements. Software licensing control allows the user access to additional features without requiring the user to install a new software version with the addition of each enhancement.
  • the host also performs subsystem control processes 220 .
  • the host oversees all of the subsystem processes that are integrated into the multimedia distribution system.
  • These sub-processes include the multimedia capture system 230 that controls the capture of the video and audio images and the processing and formatting of the captured data.
  • a motion detection 240 process operates on the captured images to allow detection of motion over a sequence of the captured images. Motion detection can be performed on the entire image or may be limited to only a portion of the image. The operation of motion detection will be discussed in detail later.
  • the event response 250 process allows a number of predefined events to be configured as triggering events.
  • the triggering event may be the passage of time, detection of audio, a particular instant in time, user input, or any other event that the host process can detect.
  • the triggering events cause a response to be generated.
  • the particular response is configurable and may include generation and transmission of an email message, generation of an audio alert, capture and storage of a series of images or audio, execution of a particular routine, or any other configurable response or combination of responses.
  • Additional processes include an FTP process 260 and an IP Updater process 270 .
  • the FTP process transfers the multimedia data to an FTP server to allow widespread access to the data.
  • the IP Updater 270 operates to update the IP address of the host.
  • the host may be identified by a domain name that is easily remembered.
  • the domain name corresponds to an Internet Protocol address, but the host process may be connected to a network that utilizes dynamic IP addresses.
  • the IP address of the server may change each time the host disconnects and reconnects to the network if dynamic IP addresses are used.
  • the IP Updater 270 operates in conjunction with a Domain Name System (DNS) server to continually update the IP address of the host such that the host's domain name will always correspond to the appropriate IP address.
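As a rough illustration of the IP Updater's role, the Java sketch below reports the host's current address to a dynamic DNS service over HTTP. The update URL, parameter names, and protocol are hypothetical placeholders; the patent does not specify the update mechanism.

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

// Hypothetical IP Updater sketch: report the host's current IP address to a dynamic DNS
// service so that the host's domain name keeps resolving to the correct address.
// The endpoint and query parameters are illustrative placeholders only.
public class IpUpdaterSketch {
    public static void updateDns(String updateUrl, String hostname, String currentIp)
            throws IOException {
        URL url = new URL(updateUrl + "?hostname=" + hostname + "&ip=" + currentIp);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        int status = conn.getResponseCode();    // the DNS server acknowledges the new mapping
        conn.disconnect();
        if (status != HttpURLConnection.HTTP_OK) {
            throw new IOException("DNS update failed with HTTP status " + status);
        }
    }
}
```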
  • An example of a computer on which the host process resides is illustrated schematically in FIG. 3 .
  • the block diagram of FIG. 3 shows the host implemented on a personal computer 300 .
  • the host process is stored as a collection of instructions that are stored in the personal computer 300 .
  • the instructions may be stored in memory 304 , such as Read-Only Memory (ROM) or Random Access Memory (RAM), a hard disk 306 , a floppy disk to be used in conjunction with a floppy disk drive 308 , or a combination of storage devices.
  • the instructions are executed in the Central Processing Unit (CPU) 302 and are accessed through a bus 360 coupling the storage devices 304 , 306 , 308 to the CPU 302 .
  • the bus 360 can include at least one address bus and one data bus, although multiple buses may also be used.
  • User input is coupled to the personal computer 300 through a keyboard 310 , a mouse 312 or other user input device. Images are displayed to the user through a monitor 314 that receives signals from a video controller 316 .
  • Video images are provided to the personal computer 300 from external video sources coupled to a video capture card 320 .
  • Although any video source may be used, a camera 322 and VCR 324 are shown in FIG. 3 .
  • a video switching system 330 is used to multiplex multiple video sources to a single video capture card 320 .
  • the video switching system 330 is controlled through a serial device controller 340 .
  • the host process controls which video source is used to supply the input by controlling the video switching system 330 .
  • the video switching system 330 is described further in the patent application previously incorporated by reference and is described below with reference to FIG. 4B .
  • external audio sources are used to provide audio input to the personal computer 300 .
  • a microphone 352 and CD player 354 are shown as the external audio sources, although any audio source may be used. Audio is coupled from the external audio sources 352 , 354 to the host process using an audio card 350 .
  • the connection from the host to the network is made using a Network Interface Card (NIC) 360 .
  • the NIC 360 is an Ethernet card, but may be substituted with, for example, a telephone modem, a cable modem, a wireless modem or any other network interface.
  • FIG. 4A is a diagram illustrating a process for video capture using an apparatus such as that shown in FIG. 3 .
  • a video signal is generated in at least one video source 410 .
  • One video source may be used or a plurality of video sources may be used.
  • a video switching system 330 is used when a plurality of video sources 410 is present. Each video source is connected to an input port of the video switching system 330 .
  • the video switching system 330 routes one of the plurality of input video signals to the video capture hardware 320 depending on the control settings provided to the video switching system 330 through a serial communications 340 link from the switcher 44 (see FIG. 1 ).
  • Video sources such as a VCR, TV tuner, or video camera generate composite video signals.
  • the video capture hardware 320 captures a single video frame and digitizes it when the video switching system 330 routes a video source outputting composite video signals to the video capture hardware 320 .
  • the system captures an image using an Application Program Interface (API) 420 , such as Video for Windows available from Microsoft Corp.
  • the API transmits the captured image to the video capture module 430 .
  • FIG. 4B is a flow chart illustrating the function of the video switching module 330 shown in FIGS. 3 and 4A .
  • the video subsystem maintains a cache of time-stamped video images for each video-input source. Requests for data are placed on a queue in the serial communications module 340 .
  • When the video switching module 330 receives a request from the queue (step 452 ), it first determines whether the requested image is available (step 454 ).
  • the requested image may be unavailable if, for example, the image is in the process of being captured. If the image is not available, the process returns to step 452 and attempts to process the request again at step 454 .
  • If the requested image is available, the switching module 330 determines whether the image already exists in the cache (step 456 ).
  • If the image exists in the cache, the switching module 330 sends the image to the requesting CGI 52 a, 52 b (see FIG. 1 ) and removes the request from the queue (step 468 ). If the image does not exist in the cache, the switching module 330 proceeds to obtain the image. First, it determines whether the switcher is set to the source of the requested image (step 458 ). If the switcher is set to the proper source, the image is captured and placed in the cache (step 466 ). The image is then sent to the requesting CGI and the request is removed from the queue (step 468 ). If the switcher is not set to the proper source, the switching module 330 causes a command to be sent to the switcher to switch to the source of the requested image (step 460 ).
  • At step 462 , optional operations may be performed to empty pipelines in the capture device's hardware or driver implementation. Whether these operations are needed is determined via test and interaction with the device during installation.
  • the switching module 330 then waits a predetermined length of time (step 464 ). This delay allows the video capture device to synchronize with the new video input stream.
  • the requested image is then captured and placed in the cache (step 466 ).
  • the image is then sent to the requesting CGI, and the request is removed from the queue (step 468 ). Once the request has been removed, the switching module 330 returns to the queue to process the next request.
  • Although the above description relates to the switching of video inputs, it may also apply to any switching module including, for example, the multimedia switcher 44 illustrated in FIG. 1 .
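The request-handling loop of FIG. 4B can be summarized with the Java-style sketch below. All helper types (Request, ImageCache, Switcher, CaptureDevice) and the delay constant are hypothetical stand-ins introduced only to mirror the numbered steps; they are not interfaces defined by the patent.

```java
// Illustrative sketch of the FIG. 4B switching loop; Request, ImageCache, Switcher and
// CaptureDevice are hypothetical helper types used only to mirror the steps above.
void processRequests(java.util.Queue<Request> queue, ImageCache cache,
                     Switcher switcher, CaptureDevice capture) throws Exception {
    final long SYNC_DELAY_MS = 100;                      // assumed settling time (step 464)
    while (true) {
        Request req = queue.peek();                      // step 452: look at the next request
        if (req == null || !req.imageAvailable()) {
            continue;                                    // step 454: try the request again later
        }
        byte[] image = cache.get(req.source());          // step 456: is the image already cached?
        if (image == null) {
            if (!switcher.isSetTo(req.source())) {       // step 458: wrong source selected?
                switcher.switchTo(req.source());         // step 460: command the switcher
                capture.flushPipeline();                 // step 462: optional pipeline flush
                Thread.sleep(SYNC_DELAY_MS);             // step 464: let the capture hardware sync
            }
            image = capture.captureFrame();              // step 466: capture and cache the image
            cache.put(req.source(), image);
        }
        req.cgi().send(image);                           // step 468: send to the requesting CGI
        queue.remove(req);                               //           and remove the request
    }
}
```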
  • Audio signals are captured in a process (not shown) similar to video capture. Audio sources are connected to multimedia audio hardware in the personal computer.
  • the audio capture module makes periodic requests through an API such as Windows Multimedia, available from Microsoft Corp., for audio samples and makes the data available as a continuous audio stream.
  • the host 10 distributes the multimedia data to requesting clients once the multimedia data has been captured.
  • the host, which runs the host multimedia distribution application, is configured as a web server 50 in order to allow connections by numerous clients.
  • the client 30 can be a remote hardware system that is also connected to the network.
  • the client may be configured to run a Java-enabled browser.
  • the term “browser” is used to indicate an application that provides a user interface to the network, particularly if the network is the World Wide Web.
  • the browser allows the user to look at and interact with the information provided on the World Wide Web.
  • a variety of commercially available browsers are available for computers.
  • compact browsers are available for use in portable devices such as wireless phones and personal digital assistants.
  • the features available in the browser may be limited by the available processing, memory, and display capabilities of the hardware device running the browser.
  • Java is a programming language developed especially for writing client/server and networked applications.
  • a Java applet is commonly sent to users connected to a particular web site.
  • the Java archive, or Jar, format represents a compressed format for sending Java applets.
  • instructions contained in the Java applet are compressed to enable faster delivery across a network connection.
  • a client running a Java-enabled browser can connect to the server and request multimedia images.
  • Wireless devices may implement browsers using the Wireless Application Protocol (WAP) or other wireless modes.
  • WAP is a specification for a set of communication protocols to standardize the way that wireless devices, such as wireless phones and radio transceivers, are used for Internet access.
  • a client 30 initially connecting via the network 20 to the host makes a web request, or Type I request 512 , while logged on to a website.
  • the term “website” refers to one or more interrelated web page files and other files and programs on one or more web servers.
  • the files and programs are accessible over a computer network, such as the Internet, by sending a hypertext transfer protocol (HTTP) request specifying a uniform resource locator (URL) that identifies the location of one of the web page files.
  • the files and programs may be owned, managed or authorized by a single business entity or an individual.
  • Such files and programs can include, for example, hypertext markup language (HTML) files, common gateway interface (CGI) files, and Java applications.
  • a “web page” comprises that which is presented by a standard web browser in response to an HTTP request specifying the URL by which the web page file is identified.
  • a web page can include, for example, text, images, sound, video, and animation.
  • the server performs Type I processing 510 in response to the Type I request 512 from the client.
  • During Type I processing, the server opens a communication socket, designated socket “a” in FIG. 5A , and sends a Jar to the client.
  • the first communication socket, socket “a,” is closed once the Jar is sent to the client.
  • the client then extracts the Jar and runs it as a video applet once the entire Jar arrives at the client system.
  • the functionality of the video applet can be implemented by software or firmware at the client.
  • the video applet running on the client system makes a request to the server running on the host.
  • the request specifies parameters necessary for activation of a Common Gateway Interface (CGI) necessary for multimedia distribution.
  • the video applet request may supply CGI parameters for video source selection, frame rate, compression level, image resolution, image brightness, image contrast, image view, and other client configurable parameters.
  • the specific parameters included in the request can be determined by which button or link was selected as part of the Type I request.
  • the web page may offer a separate button or link for each of several classes of clients. These classes refer to the capability of clients to receive data in specific formats and at specific rates.
  • one button may correspond to a request for the data at a high video stream rate (30 frames per second) while another button corresponds to a request for the data in simple JPEG (single frame) format.
  • the video applet can survey the capabilities of the client system and select appropriate parameters based upon the results of the survey, or the video applet can respond to user input.
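For illustration only, a video applet request carrying the client-configurable settings listed above might be built as in the sketch below. The CGI name and parameter keys are hypothetical stand-ins; the patent does not define a concrete URL syntax.

```java
// Hypothetical example of the parameters a video applet might pass when asking the
// web server to launch a video CGI; names and values are illustrative only.
public class VideoRequestExample {
    public static void main(String[] args) {
        String request = "http://host.example.com/cgi-bin/video.cgi"
                + "?source=1"                   // video source selection
                + "&fps=30"                     // requested frame rate
                + "&compression=50"             // compression level
                + "&width=320&height=240"       // image resolution
                + "&brightness=60&contrast=50"; // image settings
        System.out.println(request);
    }
}
```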
  • the server receives the video applet request and, in response, establishes a communication port, denoted socket “b,” between the server and the client.
  • the server then launches a CGI using the parameters supplied by the video applet request and provides client access on socket “b.”
  • the video CGI 530 established for the client then sends the formatted video image stream over the socket “b” connection to the video applet running on the client.
  • the video applet running on the client receives the video images and produces images displayed at the client.
  • the applet may be configured to perform a traffic control function.
  • the client may have requested a high stream rate (e.g., 30 frames per second) but may be capable of processing or receiving only a lower rate (e.g., 10 frames per second).
  • This reduced capability may be due, for example, to network transmission delays or to other applications running on the client requiring more system resources.
  • Once a transmission buffer memory is filled, the server is unable to write further data.
  • When the applet detects this backup, it submits a request to the server for a reduced stream rate.
  • This request for change is submitted via, for example, a “set parameter” CGI 570 , or a frame rate CGI, which is described in further detail below with reference to FIG. 5B .
  • the applet can compare a timestamp embedded in each frame (described below with reference to FIG. 6 ) with the client's internal clock, for example. By detecting a change in the relative time between consecutive frames, the applet is able to recognize the backup and skip processing of delayed frames. Thus, the client proceeds to process the current frame rather than an old frame. For example, if the client receives 30 frames per second and can only process one frame per second, the applet will cause the client to process the first frame, skip the next 29 frames and process the 31st frame.
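A minimal sketch of this timestamp-based traffic control, assuming the client knows roughly how many frames per second it can handle, is shown below; the class and its threshold are illustrative, not taken from the patent.

```java
// Sketch of timestamp-based frame skipping: frames arriving faster than the client can
// process them are dropped so that the most recent frame is always the one displayed.
public class FrameSkipper {
    private long lastShownTimestampMs = Long.MIN_VALUE;
    private final long minIntervalMs;             // e.g. 1000 ms if the client can show 1 frame/s

    public FrameSkipper(long minIntervalMs) {
        this.minIntervalMs = minIntervalMs;
    }

    /** Returns true if this frame should be decoded and displayed, false if it should be skipped. */
    public boolean shouldProcess(long frameTimestampMs) {
        if (frameTimestampMs - lastShownTimestampMs < minIntervalMs) {
            return false;                          // frame arrived too soon after the last shown one
        }
        lastShownTimestampMs = frameTimestampMs;
        return true;                               // e.g. show frame 1, skip 29, show frame 31
    }
}
```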
  • the client can also select to view only a portion of the image. For example, the client may select a region of the image that he wishes to magnify.
  • the applet allows the client to submit a request to the CGI to transmit only blocks corresponding to the selected region. By selecting only the selected blocks, the necessary bandwidth for transmission is further reduced.
  • the client can zoom to any region of the captured image.
  • the client may submit a request, via the applet, to pan across the image in any direction, limited only by the boundaries of the captured image. The applet submits this request as a change in the requested region.
  • the video CGI 530 determines, according to the parameters passed by the video applet, whether to submit a request for an additional video frame and whether to send the additional information to the client.
  • a similar audio CGI 560 is established using an audio applet running on the client. Each time an audio block is encoded at the server, it is available to be transmitted to the client. The audio CGI 560 transmits the audio information to the client as a continuous stream.
  • the applet may be configured to perform an audio traffic control function similar to that described above with respect to the video CGI 530 .
  • the client may have initially requested an 8-bit audio stream but may be capable of only handling a 4-bit or a 2-bit stream.
  • 2-bit and 4-bit audio streams are encoded based on adaptive differential pulse code modulation (ADPCM) as described by Dialogic Corporation.
  • the 4-bit audio samples are generated from 16-bit audio samples at a fixed rate.
  • the 2-bit audio encoder modifies the standard ADPCM by removing the two lowest step bits, resulting in 2-bit samples from the original 16-bit data.
  • An 8-bit stream is generated by converting 16-bit samples into 8 bits using a μ-law encoder, which is utilized in the Sun Microsystems, Inc. audio file format. This encoder is defined in ITU-T standard G.711.
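The 8-bit μ-law conversion mentioned above follows the well-known G.711 companding scheme; a compact sketch of converting one 16-bit linear sample to a μ-law byte is given below. This is a standard formulation offered for illustration, not code from the patent.

```java
// G.711 mu-law companding: convert one 16-bit linear PCM sample to an 8-bit mu-law byte.
public class MuLawEncoder {
    private static final int BIAS = 0x84;   // standard mu-law bias (132)
    private static final int CLIP = 32635;  // clip the magnitude to avoid overflow after biasing

    public static byte encode(short pcm) {
        int sign = (pcm >> 8) & 0x80;              // keep the sign in bit 7
        int magnitude = (sign != 0) ? -pcm : pcm;  // work with the magnitude
        if (magnitude > CLIP) magnitude = CLIP;
        magnitude += BIAS;

        // Find the segment (exponent): position of the highest significant bit.
        int exponent = 7;
        for (int mask = 0x4000; (magnitude & mask) == 0 && exponent > 0; mask >>= 1) {
            exponent--;
        }
        int mantissa = (magnitude >> (exponent + 3)) & 0x0F;
        return (byte) ~(sign | (exponent << 4) | mantissa);  // mu-law bytes are stored inverted
    }
}
```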
  • When the applet detects a discrepancy between the transmitted audio data and the capabilities of the client, it submits a request for change to the server.
  • the audio CGI 560 then closes the audio stream and reopens it at the appropriate data rate.
  • the client determines the type of CGI that controls the information flowing to it on socket b by making the appropriate request.
  • For a JPEG Push CGI 540 or a Wireless Application Protocol (WAP) CGI 550 , no applet is involved and no socket “b” is established.
  • A WAP-enabled device requests a WAP CGI 550 to be set up at the server.
  • Video frames are then routed to the WAP-enabled device using the WAP CGI in lieu of the video CGI 530 via socket “a”. The video frames are routed to the client as JPEG files.
  • a JPEG Push CGI 540 is set up at the server if the client requests JPEG Push.
  • for each particular client, a separate socket “b” connection to the server is established and a separate CGI appropriate for that client's capabilities is utilized.
  • An additional CGI that utilizes a socket is the “set parameter” CGI 570 .
  • a client may revise the parameters that control the received images and audio by adjusting controls that are available on the video applet.
  • the “set parameter” CGI 570 is launched to change the parameters at the server. It can be seen that each individual client may change the CGI settings associated with that particular client without affecting the images or audio being sent to any other client. Thus, each individual client has control over its received multimedia without affecting the capture process running on the server system.
  • FIG. 5B is a block diagram illustrating the streaming of the video data by the host to clients and the flow of commands and information between components of the host and the client.
  • the video streaming begins when the client, via the remote user's web browser 505 a, sends a request (indicated by line 581 ) to the host server system 510 .
  • the request is an HTTP request.
  • the server system 510 sends (line 582 ) a Jar to the client's web browser 505 .
  • the Jar includes an applet that is launched by the client's web browser 505 .
  • the applet then sends a request to the web server 510 for the web server 510 to launch a CGI (line 583 ). Additionally, the applet causes the client to send client-specific parameters to the web server 510 . In response to the request, the web server 510 establishes a socket and launches a CGI 530 according to the parameters supplied by the client and information associated with the socket (line 584 ). The CGI 530 submits periodic requests for video information to a video encoder 525 (line 585 ).
  • the video encoder 525 receives JPEG-encoded video data from a video capture module 515 and formats the data for streaming as described, for example, below with reference to FIGS. 6 and 7 (line 586 ).
  • the encoder 525 responds to the requests from the CGI 530 by transmitting the encoded video information to the CGI 530 (line 585 ).
  • the video encoder module 525 and the video CGI module 530 may be sub-modules in the video CGI 52 a shown in FIG. 1 .
  • the CGI 530 transmits the encoded video frames to the applet over the established socket (line 587 ).
  • the applet decodes the encoded video frames, providing video to the user.
  • the applet may be configured to perform a traffic control function.
  • When the applet is launched on the remote viewer's browser 505 b, it launches a frame-rate monitoring thread 535 (line 591 ).
  • the thread 535 monitors the video stream for frame delays (step 545 ) by, for example, comparing time stamps of video frames with the client's internal clock, as described above.
  • the video applet continuously checks for frame delays (line 593 ).
  • if a frame delay is detected, the applet requests that the web server 510 launch a frame-rate CGI 555 .
  • the request also submits parameters to indicate the frame rate capabilities of the client.
  • the parameters are submitted to the video CGI 530 (line 595 ) which changes the rate at which video is streamed to the user.
  • the video CGI compresses and formats the video images for streaming in order to reduce the required network bandwidth.
  • the video applet running on the client extracts the video image from the compressed and encoded data.
  • a block diagram of the video stream format is shown in FIG. 6 .
  • the video stream can be formatted in several ways, with each format transmitting separate video image information. All video stream formats consist of a single six-byte header 602 followed by a number of video blocks 604 a - 604 nn.
  • the six-byte header 602 is made up of a one-byte error code 610 , a one-byte source 612 , and a four-byte connection ID 614 .
  • the one-byte error code 610 indicates whether an error is present in the transmission.
  • a zero value error code 610 indicates a successful transmission follows.
  • a non-zero error code indicates an error has been detected and no data blocks will follow.
  • the non-zero error code 610 therefore, indicates the data stream is complete.
  • the one-byte source 612 indicates the origin of the video image.
  • a zero value source 612 indicates the host as the source of the video image.
  • a one in the source 612 indicates the image is coming from a mirror site. The use of a mirror site is discussed in detail below. Use of a mirror site is not otherwise detectable by the client and does not degrade the image received at the client.
  • the four-byte connection ID 614 is used to designate the specific client.
  • the connection ID 614 is an identifier that is unique to each connected client.
  • a series of video blocks 604 follow the header 602 .
  • Different video block formats are used to transmit different size video images.
  • all video block formats utilize a structure having a four-byte frame size field 620 followed by a four-byte block type field 622 , followed by block data fields 624 .
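The stream header and common block prefix described above can be read straightforwardly from the socket; the sketch below assumes big-endian (network order) integers, which the patent does not state explicitly, and leaves the type-specific block payloads to the formats described next.

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch of reading the six-byte stream header and the common block prefix.
// Big-endian integers are assumed; the patent does not specify the byte order.
public class VideoStreamReader {
    public static void readStream(InputStream in) throws IOException {
        DataInputStream data = new DataInputStream(in);

        int errorCode    = data.readUnsignedByte(); // error code 610 (non-zero: no blocks follow)
        int source       = data.readUnsignedByte(); // source 612 (0 = host, 1 = mirror site)
        int connectionId = data.readInt();          // connection ID 614, unique per client
        if (errorCode != 0) {
            return;                                 // the data stream is complete
        }
        int frameSize = data.readInt();             // frame size field 620 of the first block
        int blockType = data.readInt();             // block type field 622
        // The block data fields 624 follow; their layout depends on blockType
        // (block type N, Single Block, Synchronization Frame, size change or error block).
    }
}
```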
  • a first type of video block 604 is defined as block type N, where N represents a positive integer defining the number of image segments encoded in the block.
  • a block type N format utilizes a data triplet to define each of N video segments.
  • Each of the N data triplets contains a four-byte X position field 632 , a four-byte Y position field 634 , and a four-byte width field 636 .
  • the X and Y positions define the location of the segment on the client screen.
  • the width field 636 defines the width of the video segment.
  • the height of the video segment for the block type N video format is preset at sixteen pixels.
  • each of the data triplets defines a video stripe image that is displayed on the client screen.
  • the block type N video format utilizes a series of data blocks.
  • a four-byte data offset field 640 is used to facilitate faster transmission of data by not transmitting identical bytes of data at the beginning of each image. For example, two consecutive images may have the identical first 600 bytes of data.
  • the data offset field 640 will be set to 600 and will prevent retransmission of those 600 bytes.
  • a Data Size (DS) field 642 follows the data offset field 640 and is used to define the size of the data field that follows.
  • Two four-byte timestamp fields 644 , 646 follow the DS field 642 .
  • the first timestamp field 644 is used to timestamp the video image contained in the block type N image.
  • the timestamp 644 may be used to update a timestamp that is displayed at the client.
  • the second timestamp field 646 is used to synchronize the video stream with an audio stream.
  • the contents of the DS field 642 define the number of data bytes in the data field 648 that follows the timestamp fields 644 and 646 .
  • the information in the data field 648 is JPEG encoded to compress the video image.
  • each data triplet defines the location and width of a JPEG encoded video image stripe.
  • when all of the segments have the same Y coordinate, the segments form a single video stripe in the image.
  • the initial segment 650 a is a sixteen-pixel-high segment having a width defined in the first data triplet.
  • subsequent segments 650 b - 650 n are sixteen-pixel-high segments with widths defined by the width field 636 b - 636 n of the corresponding triplet.
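A sketch of parsing one block type N payload (after the frame size and block type fields) is shown below. Field names track the reference numerals used above; the integer byte order is an assumption, and the helper class is illustrative rather than code from the patent.

```java
import java.io.DataInputStream;
import java.io.IOException;

// Illustrative parser for a "block type N" payload: N (x, y, width) triplets describing
// sixteen-pixel-high stripes, followed by offset, size, timestamps and JPEG data.
public class BlockTypeNReader {
    public static void readBlock(DataInputStream data, int n) throws IOException {
        int[][] stripes = new int[n][3];
        for (int i = 0; i < n; i++) {
            stripes[i][0] = data.readInt();   // X position 632 of the stripe
            stripes[i][1] = data.readInt();   // Y position 634 of the stripe
            stripes[i][2] = data.readInt();   // width 636 (height is preset to sixteen pixels)
        }
        int dataOffset    = data.readInt();   // 640: leading bytes identical to the previous image
        int dataSize      = data.readInt();   // 642: number of bytes in the data field 648
        int timestamp     = data.readInt();   // 644: capture time, may be shown at the client
        int syncTimestamp = data.readInt();   // 646: used to synchronize with the audio stream
        byte[] jpegData = new byte[dataSize];
        data.readFully(jpegData);             // 648: JPEG-encoded stripe data
        // The first dataOffset bytes are not retransmitted; they are reused from the previous image.
    }
}
```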
  • Another video block type is denoted block type-3 and is also known as a Single Block type.
  • the structure of the Single Block is shown in FIG. 7 .
  • the Single Block format begins with a pair of four-byte data fields.
  • the first four-byte data field provides the initial horizontal location, X0 710 .
  • the second four-byte data field provides the initial vertical location, Y0 712 .
  • the coordinates X0 710 and Y0 712 define the upper left corner of the video image provided in the Single Block.
  • a second pair of four-byte data fields follows the first pair.
  • the second pair of data fields defines the lower right corner of the video image provided in the Single Block.
  • the first data field in the second pair provides the final horizontal position, X1 714 , and the second data field in the pair provides the final vertical position, Y1 716 .
  • a four-byte Data Offset field 718 follows the two pairs of coordinates.
  • a Data Size (DS) field 720 follows the Data Offset field 718 and is used to define the number of bytes in the data field 726 .
  • Immediately following the DS field 720 are two four-byte timestamp fields 722 and 724 to identify the time the video image was generated.
  • the video applet running on the client can extract the timestamp information in order to overlay a timestamp on the image.
  • the Single Block is completed with a data field 726 consisting of the number of data blocks defined in the DS field 720 .
  • the Single Block type defines a rectangular video image spanning the coordinates (X0, Y0)-(X1, Y1).
  • Block type-4, also designated a Synchronization Frame, has a data format identical to that of the above-described Single Block.
  • the initial horizontal and vertical coordinates, X0 and Y0 , are set to zero. Setting the initial coordinates to zero aligns the upper left corner of the new image with the upper left corner of the existing image.
  • the final horizontal and vertical coordinates in the Synchronization Frame correspond to the width of the whole image and the height of the whole image, respectively. Therefore, it can be seen that the Synchronization Frame can be used to refresh the entire image displayed at the client.
  • the Synchronization Frame is used during the dynamic update of the video frame rate in order to limit transmission delays, as described above with reference to FIG. 5B .
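For reference, the Single Block (block type-3) fields map naturally onto a small structure such as the one below; a Synchronization Frame (block type-4) uses the identical layout with the initial coordinates set to (0, 0) and the final coordinates set to the full image width and height. The class is an illustrative summary, not code from the patent.

```java
// Illustrative summary of the Single Block (block type-3) layout; a Synchronization Frame
// (block type-4) is identical but spans the whole image: (0, 0) to (width, height).
public class SingleBlock {
    int x0, y0;          // 710, 712: upper left corner of the rectangular image
    int x1, y1;          // 714, 716: lower right corner of the rectangular image
    int dataOffset;      // 718: leading bytes identical to the previous image
    int dataSize;        // 720: number of bytes in the data field 726
    int timestamp;       // 722: time the video image was generated
    int syncTimestamp;   // 724: second timestamp (audio/video synchronization)
    byte[] jpegData;     // 726: JPEG-encoded image data
}
```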
  • Block type-1 does not contain any image data within it. Rather, it is used to indicate a change in the transmitted image size.
  • the block type-1 format consists of a four-byte data field containing the New Width 740 , followed by a four-byte data field containing the New Height 742 .
  • the block type-1 information must be immediately followed by a full-image Single Block or Synchronization Frame.
  • block type-2 is designated the Error Block.
  • the Error Block consists solely of a one-byte Error Code 750 .
  • the Error Block is used to indicate an error in the video stream. Transmission of the video stream is terminated following the Error Code 750 .
  • With reference to FIG. 8 , motion detection which can be carried out by the host will be described.
  • the contents of a frame can further be processed by the main program module 46 (see FIG. 1 ) as follows. Data from subsequent video frames can be compared to determine whether the frames capture motion.
  • FIG. 8 shows a flow chart of the motion detection process.
  • a JPEG-encoded frame is received from the video capture module 40 a by the main program module 46 (see FIG. 1 ).
  • the frame is first subdivided into a grid of, for example, 16 blocks by 16 blocks in order to detect motion within sequential images (step 802 ). Motion can be detected in each individual block.
  • the number of blocks used to subdivide the frame is determined by the precision with which motion detection is desired.
  • a large number of blocks per frame increases the granularity and allows for fine motion detection but comes at a cost of processing time and increased false detection of motion due to, for example, jitter in the image created by the camera or minute changes in lighting.
  • a lower number of blocks per frame provides decreased resolution but allows fast image processing.
  • the frame may be the complete image transmitted to the clients or may be a subset of the complete image. In other words, motion detection may be performed on only a specific portion of the image. The host user may determine the size and placement of this portion within the complete image, or it may be predetermined.
  • each block in the grid is motion processed (referenced in FIG. 8 as 810 ). Motion processing is performed on each block using comparisons of the present image with the previous image.
  • a cross-correlation between the block being processed of the current image and the corresponding block of the previous image is calculated.
  • the cross-correlation includes converting the captured blocks to grayscale and using the gray values of each pixel as the cross-correlated variable.
  • the variable used for cross-correlation may be related to other aspects of the image such as light frequency of pixels.
  • the cross-correlation is then compared with a predetermined threshold.
  • the predetermined cross-correlation threshold can be a static value used in the motion detection process or it can be dynamic. If the cross-correlation threshold is dynamic, it may be derived from the size of the blocks or may be set by the host user. The host user may set the cross-correlation threshold on a relative scale where the scale is relative to a range of acceptable cross-correlation values. Use of a relative scale allows the host user to set a cross-correlation threshold without having any knowledge of cross-correlation. It may be preferable for the cross-correlation threshold to be set higher when the block size is large.
  • a lower cross-correlation threshold may be preferable where the block size is small and there are not many pixels defining the block.
  • the cross-correlation threshold can be set in accordance with the environment in which the system operates (e.g., outdoor versus indoor) and the particular use of the motion detection (e.g., detecting fast movement of large objects).
  • the process next calculates the variance in the brightness of the block over the corresponding block of the previous image (step 816 ).
  • the variance is compared against a variance threshold at step 818 .
  • the variance threshold may be static or dynamically determined. If the calculated variance falls below the variance threshold then no motion is indicated in the block, and the process continues to step 890 .
  • the block is not marked as one having motion. However, if the variance exceeds the variance threshold, the block is marked as having motion at step 820 , and the process continues to step 890 .
  • the brightness variance may be calculated and compared to a variance threshold.
  • brightness variances alone may be sufficient to detect motion.
  • the preferred embodiment illustrated in FIG. 8 requires both a sufficient variance in brightness and in the cross-correlation variable.
  • the routine checks to see if all blocks have been processed. If all blocks have been processed, the motion detection routine in the main program 46 terminates (step 899 ) and returns the results to the video capture module 40 a shown in FIG. 1 . However, if not all blocks of the current image have been processed, the routine returns to motion processing (reference 810 ) to analyze the next block.
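  • the block-level test of FIG. 8 can be illustrated with the following sketch; the grayscale conversion (a simple channel mean), the direction of the cross-correlation comparison, and the use of the variance of the pixel-wise brightness difference are assumptions made for the example.

```python
import numpy as np

def block_has_motion(curr_block, prev_block, corr_threshold, var_threshold):
    """Motion test for one grid block, following the flow of FIG. 8.

    Assumptions: blocks are 8-bit RGB arrays of identical shape; motion is
    suspected when the normalized cross-correlation of the grayscale blocks
    drops below corr_threshold and confirmed when the brightness variance
    between the blocks exceeds var_threshold."""
    # Convert both blocks to grayscale; the gray value of each pixel is the
    # cross-correlated variable.
    curr_gray = curr_block.mean(axis=2).ravel()
    prev_gray = prev_block.mean(axis=2).ravel()

    # Normalized cross-correlation of the current block with the previous one.
    if curr_gray.std() == 0 or prev_gray.std() == 0:
        corr = 1.0  # flat blocks: treat as unchanged
    else:
        corr = np.corrcoef(curr_gray, prev_gray)[0, 1]
    if corr >= corr_threshold:
        return False            # blocks remain well correlated: no motion

    # Variance in brightness of the block over the corresponding previous block.
    brightness_var = np.var(curr_gray - prev_gray)
    return brightness_var > var_threshold
```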
  • FIG. 9 shows a flow chart of the motion detection process performed by the main program 46 (see FIG. 1 ) on a frame level.
  • Motion detection requires comparison of at least two frames, one of which is used as a reference frame. Initially, a first frame is captured and used as the reference frame for determining motion detection (step not shown in FIG. 9 ). The first step in detecting motion is capture of the current frame (step 902 ).
  • Motion detection step 800 on the block level, as described above with reference to FIG. 8 , is performed on the captured frame using the initial frame as the reference.
  • the motion detection process calculates the fraction of blocks that have motion (step 910 ). The calculated fraction is compared against “low,” “medium,” and “high” thresholds.
  • the thresholds may be static or dynamic as described above for the thresholds in the block motion detection process (step 800 ).
  • if, at step 920 , the calculated fraction falls below the “low” threshold, then no motion has been detected in the frame, and the detection process proceeds to step 990 . However, if the calculated fraction exceeds the lowest threshold, then the fraction must lie within one of three other ranges, and the process continues to step 930 .
  • the calculated fraction is compared against the “medium” threshold. If the calculated fraction does not exceed the “medium” threshold (i.e., the fraction is in the low-medium range), the process continues to step 935 .
  • the motion detection process performs “slight” responses. Slight responses may include transmitting a first email notification to an address determined by the host user, sounding an audible alert, originating a phone call to a first number determined by the host user, or initiating predetermined control of external hardware, such as alarms, sprinklers, or lights. Any programmable response may be associated with the slight responses, although advantageously, the lowest level of response is associated with the slight response.
  • the process continues to step 960 .
  • at step 940 , the calculated fraction is compared against the “high” threshold. If the calculated fraction does not exceed the “high” threshold (i.e., the fraction is in the medium-high range), the process continues to step 945 .
  • the motion detection process performs moderate responses. Moderate responses may include any of the responses that are included in the slight responses.
  • the moderate responses are associated with a higher level of response.
  • a second email message may be transmitted indicating the detected motion lies within the second range, or a second predetermined phone message may be directed to a phone number determined by the host user. After performing the “moderate” responses, the process continues to step 960 .
  • the process continues to step 950 .
  • the motion detection process performs severe responses.
  • the severe responses may include transmitting a third email message to a predetermined address, originating a phone call with a “severe” message to a predetermined phone number, originating a phone call to a predetermined emergency phone number, or controlling external hardware associated with severe responses. External hardware may include fire sprinklers, sirens, alarms, or emergency lights.
  • the motion detection process logs the motion and the first twelve images having motion regardless of the type of response performed.
  • the motion detection threshold is, in this manner, used as a trigger for the recording of images relating to the motion-triggering event.
  • the images are time-stamped to correlate the motion-triggering event with a time frame.
  • Motion detection using this logging scheme is advantageously used in security systems or any system requiring image logging in conjunction with motion detection.
  • the motion detection process is done once the twelve motion images are recorded.
  • the motion detection process may be part of a larger process such that the motion detection process repeats indefinitely. Alternatively, the motion detection process may run on a scheduled basis as determined by another process.
  • although the foregoing example utilizes low, medium, and high thresholds, fewer or more thresholds can be used.
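  • as an illustration of the frame-level classification of FIG. 9, the sketch below maps the fraction of moving blocks to the response tiers; the comparisons at the exact threshold boundaries are an assumption, and the responses themselves (email, phone call, hardware control) are configured elsewhere.

```python
def classify_frame_motion(block_flags, low, medium, high):
    """Frame-level classification following FIG. 9.

    block_flags holds the per-block results of the block-level routine;
    low, medium, and high are the (static or dynamic) thresholds."""
    fraction = sum(block_flags) / len(block_flags)
    if fraction <= low:
        return "none"
    if fraction <= medium:
        return "slight"       # e.g. first email notification, audible alert
    if fraction <= high:
        return "moderate"     # e.g. second email or phone message
    return "severe"           # e.g. emergency call, sirens, alarms
```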
  • Additional advantages may be realized using block motion detection in conjunction with the different image encoding formats shown in FIG. 6 and FIG. 7 .
  • Transmitting a complete video image to a client requires a great deal of network bandwidth even though the image may be JPEG-encoded.
  • the amount of network bandwidth required to transmit images to a client can be reduced by recognizing that most of the data within an image remains the same from one image to the next. Only a small fraction of the image may include data not previously transmitted to the client in a previous image.
  • the network bandwidth requirement can therefore be reduced by transmitting only those blocks of the image that change from frame to frame.
  • the client is not aware that the entire image is not retransmitted each time because those blocks that are not retransmitted contain no new information.
  • a process for conserving network bandwidth by transmitting only changed image blocks is performed by the video CGI 52 a (see FIG. 1 ) and is shown in FIG. 10 .
  • the process begins by capturing an image (step 1010 ).
  • the process then performs block motion detection 800 as described above with reference to FIG. 8 .
  • the oldest blocks in the image, those unchanged after a predetermined number of image capture cycles, are marked as having changed even though they may remain the same. Marking the oldest blocks as having changed allows the image at the client to be refreshed over a period of time even though there may be no new information in the image frame.
  • the route the process takes diverges depending on a chosen compression level. The level of compression may be preselected by the host.
  • the host may offer the client a choice of compression levels. If low compression is selected, the process continues to step 1040 , and the image to be transmitted to the client is set to the full image frame. The process then constructs the appropriate header (step 1042 ) and creates the JPEG image for the full image frame (step 1044 ). The process then proceeds to step 1090 .
  • when medium compression is selected at step 1030 , the process first finds the minimum region containing changed blocks (step 1050 ). The fraction of changed blocks in the minimum region is compared to a predetermined threshold at step 1052 . If the fraction exceeds the predetermined threshold, the process constructs a header (step 1042 ), creates a JPEG image (step 1044 ), and proceeds to step 1090 . On the other hand, if the fraction is less than the predetermined threshold at step 1052 , the process continues to step 1060 .
  • at step 1060 , the process constructs a header and stripe image for the changed blocks and the oldest unchanged blocks and proceeds to step 1065 .
  • at step 1065 , the process creates JPEG blocks for the stripe image and proceeds to step 1090 .
  • at step 1090 , the data is transmitted to the client.
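  • steps 1050 and 1052 of FIG. 10 can be sketched as follows; the representation of the block grid as a two-dimensional boolean array is an assumption made for illustration.

```python
import numpy as np

def minimum_changed_region(changed_mask):
    """Find the smallest rectangle of blocks containing every changed block
    and the fraction of changed blocks inside it (steps 1050-1052 of FIG. 10).

    changed_mask is a 2-D boolean array with one entry per grid block."""
    rows = np.any(changed_mask, axis=1)
    cols = np.any(changed_mask, axis=0)
    if not rows.any():
        return None, 0.0                        # nothing changed this cycle
    top, bottom = np.where(rows)[0][[0, -1]]
    left, right = np.where(cols)[0][[0, -1]]
    region = changed_mask[top:bottom + 1, left:right + 1]
    fraction = region.sum() / region.size       # compared against the threshold
    return (top, left, bottom, right), fraction
```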
  • FIG. 11 is a block diagram of one format of an audio stream.
  • the audio stream comprises a series of audio frames 1110 that are transmitted by the host in encoded form to the client.
  • the encoding of an audio frame is described below with reference to FIG. 12 .
  • the host also compresses the audio data to reduce the required bandwidth for transmission.
  • Each audio frame 1110 has a header 1120 followed by eight blocks 1121 - 1128 of encoded audio data.
  • the header 1120 of each audio frame 1110 comprises five fields.
  • the first is a host time field 1130 .
  • This four-byte field indicates the host clock time corresponding to the audio frame.
  • the host time field 1130 allows the client to, for example, match the audio frame to the corresponding video frame.
  • the second field in the frame header 1120 is a one-byte bit depth field 1132 .
  • the bit depth field 1132 is followed by a two-byte frame size field 1134 .
  • the frame size field 1134 communicates the length of the audio frame to the client.
  • the last two fields in the frame header 1120 contain decoder variables that correspond to the method used to encode the audio frames. These fields include a two-byte LD field 1136 and a one-byte SD field 1138 .
  • the LD and SD fields 1136 , 1138 are algorithm specific variables used with the 2-bit and 4-bit ADPCM audio encoders discussed above with reference to FIG. 5A .
  • Each block 1121 - 1128 in the audio frame 1110 contains a silence map 1140 and up to eight packets 1141 - 1148 of audio data.
  • the silence map 1140 is a one-byte field.
  • Each of eight silence bits in the silence map field 1140 corresponds to a packet of encoded audio data.
  • the information in the silence bits indicates whether or not the corresponding packet exists in that block 1121 - 1128 of the audio frame 1110 .
  • the silence map field 1140 may contain the following eight silence bits: 01010101, where 1 indicates a silent packet. This silence map field 1140 will be followed by only four packets of encoded audio data corresponding to silence map bits 1 , 3 , 5 and 7 .
  • each packet that is transmitted after the silence map 1140 consists of 32 samples of audio data.
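  • the frame header and silence map of FIG. 11 can be illustrated as follows; the big-endian byte order and the most-significant-bit-first reading of the silence map are assumptions not specified in the text.

```python
import struct

# Host time (4), bit depth (1), frame size (2), LD (2), SD (1); big-endian assumed.
AUDIO_FRAME_HEADER = struct.Struct(">IBHHB")

def transmitted_packets(silence_map):
    """Return the 1-based packet positions that follow this silence map.

    A set bit marks a silent packet, which is omitted from the block, so only
    the cleared bits correspond to transmitted packets."""
    return [i + 1 for i in range(8) if not (silence_map >> (7 - i)) & 1]

# The example from the text: map 01010101 leaves packets 1, 3, 5 and 7.
assert transmitted_packets(0b01010101) == [1, 3, 5, 7]
```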
  • FIG. 12 is a flow chart illustrating the encoding and generation of the audio frame for transmission to the client.
  • the encoding begins at step 1210 with the capture of 2048 audio samples from an audio source such as a microphone, CD player, or other known source. The samples are then divided into packets of 32 samples each, and the packets are grouped into blocks, each block containing eight packets (step 1215 ). A group of eight blocks forms a frame.
  • at step 1220 , the audio CGI 52 b determines whether the current packet is silent. If the packet is silent, at step 1230 , the silence bit in the silence map corresponding to the packet is set to 1. The data in the packet is not encoded, and the process continues to step 1260 . If, on the other hand, the packet is not silent, the corresponding silence bit is set to 0 (step 1240 ), and the data in the packet is encoded (step 1250 ). The process then continues to step 1260 .
  • the process determines whether the processed packet was the eighth and last packet of its block of data (step 1260 ). If the packet was not the last of its block, the process returns to step 1220 and processes the next packet of 32 samples. If the packet was the last of its block, the process writes the silence map and any non-silent packets into the block and proceeds to step 1270 .
  • the process determines whether the preceding block was the eighth and last block of the audio frame. If the block was not the last of the frame, the process returns to step 1220 to begin processing the next block by processing the next packet of 32 samples. If the block was the last of the audio frame, the process writes the audio frame by writing the header and the eight blocks. At step 1280 , the audio frame is transmitted to the client.
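  • a condensed sketch of the FIG. 12 encoding loop appears below; the silence test and the ADPCM packet encoder are passed in as placeholders because their implementations are not described here.

```python
def encode_audio_frame(samples, is_silent, encode_packet):
    """Sketch of the FIG. 12 loop: 2048 captured samples are split into
    8 blocks of 8 packets of 32 samples each.

    is_silent and encode_packet stand in for the silence test and the
    2-bit/4-bit ADPCM encoder; bit ordering matches the earlier sketch."""
    assert len(samples) == 2048
    blocks = []
    for b in range(8):
        silence_map = 0
        packets = []
        for p in range(8):
            packet = samples[(b * 8 + p) * 32:(b * 8 + p + 1) * 32]
            if is_silent(packet):
                silence_map |= 1 << (7 - p)      # set the silence bit; skip encoding
            else:
                packets.append(encode_packet(packet))
        blocks.append((silence_map, packets))
    return blocks   # the frame header is prepended before transmission (FIG. 11)
```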
  • FIG. 13 is a block diagram illustrating the broadcast of the audio data by the host to clients and the flow of commands and information between components of the host and the client.
  • the audio broadcast begins when the client, via the remote user's web browser 1310 a, sends a request (indicated by line 1391 ) to the host server system 1320 .
  • the request is an HTTP request.
  • the server system 1320 sends (line 1392 ) a Jar to the client's web browser 1310 .
  • the Jar includes an applet that is launched by the client's web browser.
  • the applet then sends a request to the web server 1320 for the web server 1320 to launch a CGI (line 1393 ). Additionally, the applet causes the client to send client-specific parameters to the web server 1320 . In response to the request, the web server 1320 establishes a socket and launches a CGI 1330 according to the parameters supplied by the client and information associated with the socket (line 1394 ). The CGI 1330 submits periodic requests for audio sample information to an audio encoder 1350 (line 1395 ).
  • the audio encoder 1350 receives audio samples from an audio capture module 1340 and encodes the samples as described, for example, above with reference to FIG. 12 (line 1396 ).
  • the encoder 1350 responds to the periodic requests from the CGI 1330 by making the encoded audio information available to the CGI 1330 via, for example, shared memory (line 1395 ).
  • the audio encoder module 1350 and the audio CGI module 1330 may be sub-modules of the audio CGI 52 b shown in FIG. 1 .
  • the CGI 1330 transmits the encoded audio frames to the applet over the established socket (line 1397 ).
  • the applet decodes the encoded audio frames, providing audio to the user.
  • FIG. 14 is a flow chart of the function of the dynamic domain name system (DNS) updating process performed by the IP PROC module 60 illustrated in FIG. 1 .
  • the updating process begins when the host 10 (see FIG. 1 ) connects to a network 20 such as the Internet.
  • an Internet Protocol (IP) address is assigned to the host 10 each time it connects, and a dynamically assigned address may change from one connection to the next.
  • the host 10 may connect to the Internet 20 through a service provider.
  • the updating process, therefore, first checks to determine whether the current IP address is new (step 1410 ). If the IP address is unchanged, the process continues to step 1450 .
  • if the IP address is new, the process sends a request to a DNS host server 90 to update the IP address.
  • the DNS host server 90 updates the IP address corresponding to the requesting host in its database or in a DNS interface 92 of a service provider affiliated with the host 10 (step 1440 ).
  • the process receives an update from the DNS host server 90 at step 1430 .
  • the process then proceeds to step 1450 .
  • the process is repeated at regular intervals, such as every 2 minutes, to keep the IP address in the DNS host server 90 updated.
  • when the client 30 seeks to obtain data from a host 10 , the client 30 is directed to the DNS host server 90 , which uses the updated information to direct the client 30 to the proper host 10 .
  • the host 10 may specify a schedule to the DNS host server 90 .
  • the schedule may indicate when the host 10 is connected to the network 20 and is available to clients 30 . If the host 10 is not available, the DNS host server 90 can direct a client 30 to a web page providing the schedule and availability of the host 10 or other information. Alternatively, the DNS host server 90 can monitor when the host 10 is not connected to the network 20 . When the host 10 is not connected to the network 20 , the DNS host server 90 can direct a client 30 to a web page with an appropriate message or information.
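  • the updating loop of FIG. 14 can be sketched as follows; the address lookup and the update request to the DNS host server 90 are shown as placeholder callables rather than actual protocol details.

```python
import time

def dns_update_loop(get_current_ip, send_update, interval_seconds=120):
    """Sketch of the FIG. 14 updating process performed by the IP PROC module.

    get_current_ip and send_update stand in for the host's address lookup and
    the update request sent to the DNS host server 90."""
    last_ip = None
    while True:
        ip = get_current_ip()
        if ip != last_ip:                 # step 1410: is the IP address new?
            send_update(ip)               # ask the DNS host server to update its record
            last_ip = ip
        time.sleep(interval_seconds)      # wait, then repeat (e.g. every 2 minutes)
```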
  • FIG. 15 is a block diagram of a system for mirroring audio and video data streamed by the host.
  • a mirror computer 1510 is configured with a web server process 1520 to interface with clients 1530 .
  • the mirror computer 1510 launches a CGI process, nph-mirr 1540 , for each requesting client 1530 .
  • An AdMirror process 1550 running on the mirror computer 1510 coordinates the mirroring of one or more hosts 1560 .
  • when a client 1530 requests a stream from a specific host 1560 , the nph-mirr process 1540 corresponding to that client 1530 causes the AdMirror process 1550 to launch a Yowzer process 1570 for that host 1560 .
  • the Yowzer process 1570 coordinates the connection of the mirror computer 1510 to the host 1560 and the streaming of the video and audio data from the host 1560 . If a Yowzer process 1570 already exists for the specific host 1560 , as may happen if the specific host 1560 has been previously requested by another client 1530 , an additional Yowzer process 1570 is not launched.
  • the AdMirror process 1550 then causes the Yowzer process 1570 corresponding to the requested host 1560 to interface with the nph-mirr process 1540 corresponding to the requesting client 1530 .
  • a single Yowzer process 1570 may support multiple nph-mirr 1540 processes and their corresponding clients 1530 .
  • Each nph-mirr process 1540 functions as, for example, the CGI 52 described above with reference to FIG. 1 , and coordinates streaming of data from the host 1560 to the client 1530 . Accordingly, the nph-mirr process 1540 sends an applet to the client 1530 and receives parameters related to the capabilities of the client 1530 and client's browser. Thus, the client 1530 receives streamed data at, for example, a frame rate that corresponds to its capability to process the frames.
  • the mirror computer 1510 assumes the responsibility of streaming the data to each of the clients 1530 . This frees the host 1560 to use its processing power for maintaining high video and audio stream rates.
  • the mirror computer 1510 may be a dedicated, powerful processor capable of accommodating numerous clients 1530 and numerous hosts 1560 .

Abstract

Video and audio signals are streamed to remote viewers that are connected to a communication network. A host server receives an originating video and audio signal that may arrive from a single source or from a plurality of independent sources. The host server provides any combination of the originating video and audio signals to viewers connected to a communication network. A viewer requests the host server provide a combination of video and audio signals from the host server. The host server transmits an instruction set to be executed by the viewer. The instruction set causes the viewer to transmit parameters to the host user, including parameters relating to the processing capabilities of the viewer. The host server then transmits multimedia data to the viewer according to the received parameters. A plurality of viewers may be simultaneously connected to the host server. Each of the plurality of viewers may configure the received video and audio signals independent of any other viewer and may generate alerts based on the video and audio content.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of, and incorporates by reference in their entireties, U.S. patent application Ser. No. 11/515,255, filed on Sep. 1, 2006, and U.S. patent application Ser. No. 09/652,113, filed on Aug. 29, 2000, now U.S. Pat. No. 7,103,668, issued on Sep. 5, 2006.
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The invention relates to devices and systems for communicating over a network. More particularly, the invention relates to a method and apparatus for streaming a multimedia signal to remote viewers connected to a communication network.
  • 2. Description of the Related Art
  • The constantly increasing processing power available in hardware devices such as personal computers, personal digital assistants, wireless phones and other consumer devices allows highly complex functions to be performed within the device. The hardware devices can perform complex calculations in order to implement functions such as spreadsheets, word processing, database management, data input and data output. Common forms of data output include video and audio output.
  • Personal computers, personal digital assistants and wireless phones commonly incorporate displays and speakers in order to provide video and audio output. A personal computer incorporates a monitor as the display terminal. The monitor, or display, on most personal computers can be configured independently of the processor to allow varying levels of resolution. The display for personal computers is typically capable of very high resolution, even on laptop-style computers.
  • In contrast, displays are permanently integrated into personal digital assistants and wireless phones. An electronic device having a dedicated display device formats data for display using dedicated hardware. The processing capabilities of the hardware as well as the display capabilities limit the amount of information displayed and the quality of the display to levels below that typically available from a personal computer. The lower quality may take the form of fewer pixels per inch, an inability to display colors, or a smaller viewing area.
  • A personal computer may integrate one of a number of hardware interfaces in order to display video output on a monitor. A modular video card or a set of video interface Integrated Circuits (IC's) is used by the personal computer to generate the digital signals required to generate an image on the monitor. The digital signals used by a computer monitor differ from the analog composite video signal used in a television monitor. However, the personal computer may incorporate dedicated hardware, such as a video capture card, to translate analog composite video signals into the digital signals required to generate an image on the monitor. Thus, the personal computer may display, on the monitor, video images captured using a video camera, or video images output from a video source such as a video tape recorder, digital video disk player, laser disk player, or cable television converter.
  • The video capture card, or equivalent hardware, also allows the personal computer to save individual video frames provided from a video source. The individual video frames may be saved in any file format recognized as a standard for images. A common graphic image format is the Joint Photographic Experts Group (JPEG) format that is defined in International Organization for Standardization (ISO) standard ISO-10918 titled DIGITAL COMPRESSION AND CODING OF CONTINUOUS-TONE STILL IMAGES. The JPEG standard allows a user the opportunity to specify the quality of the stored image. The highest quality image results in the largest file, and typically, a trade off is made between image quality and file size. The personal computer can display a moving picture from a collection of JPEG encoded images by rapidly displaying the images sequentially, in much the same way that the individual frames of a movie are sequenced to simulate moving pictures.
  • The volumes of data and image files generated within any individual personal computer provide limited utility unless the files can be distributed. Files can be distributed among hardware devices in electronic form through mechanical means, such as by saving a file onto a portable medium and transferring the file from the portable medium (e.g., floppy disks) to another computer.
  • Such mechanical file transfers are not particularly efficient and may be limited by the capacity of the transfer medium. A more efficient method of transferring files between computers is by using some type of communication link. The most basic communication link is a hardwired connection between the two computers transferring information. However, information may also be transferred using a network of computers.
  • A computer may be connected to a local network where a number of processors are linked together using dedicated communication links. File transfer speed on a dedicated network is typically constrained by the speed of the communication hardware. The physical network is typically hardwired and capable of providing a large signal bandwidth.
  • More widespread remote networks may take advantage of existing infrastructure in order to provide the communication link between networked processors. One common configuration allows remote devices to connect to a network using telephone land lines. The communication link is the factor constraining data transfer speed where low bandwidth communication links such as telephone land lines are used as network connections.
  • One well known public network that allows a variety of simultaneous communication links is the Internet. As used herein, “Internet” refers to a network or combination of networks spanning any geographical area, such as a local area network, wide area network, regional network, national network, and/or global network. As used herein, “Internet” may refer to hardwire networks, wireless networks, or a combination of hardwire and wireless networks. Hardwire networks may include, for example, fiber optic lines, cable lines, ISDN lines, copper lines, etc. Wireless networks may include, for example, cellular systems, personal communication services (PCS) systems, satellite communication systems, packet radio systems, and mobile broadband systems.
  • Individual computers may connect to the Internet using communication links having vastly differing information bandwidths. The fastest connections to the network use fiber connections directly to the network “backbone”. Connections to the network having a lower information bandwidth use E1 or T1 telephone line connections to a fiber link. Of course, the cost of the communication link is proportional to the available information bandwidth.
  • Network connections are not limited to computers. Any hardware device capable of data communication may be connected to a network. Personal digital assistants as well as wireless phones typically incorporate the ability to connect to networks in order to exchange data. Hardware devices often incorporate the hardware or software required to allow the device to communicate over the Internet. Thus, the Internet operates as a network to allow data transfer between computers, network-enabled wireless phones, and personal digital assistants.
  • One potential use of networks is the transfer of graphic images and audio data from a host to a number of remote viewers. As discussed above, a computer can store a number of captured graphic images and audio data within its memory. These files can then be distributed over the network to any number of viewers. The host can provide a simulation of real-time video by capturing successive video frames from a source, digitizing the video signal, and providing access to the files. A viewer can then download and display the successive files. The viewer can effectively display real-time streaming video where the host continually captures, digitizes, and provides files based on a real-time video source.
  • The distribution of captured real-time video signals over a network presents several problems. For example, there is no flexibility in the distribution of files to various users. A host captures the video and audio signals and generates files associated with each type of signal. As previously discussed, graphic images are commonly stored as JPEG encoded images. The use of JPEG encoding can compress the size of the graphic image file but, depending on the graphic resolution selected by the host, the image file may still be very large. The network connection at the host is an initial bottleneck to efficient file transfer. If the host sends files to the network using only a phone modem connection to transfer multiple megabyte files, no viewer will be able to display the video and audio signals in a manner resembling real-time streaming video.
  • The viewer's network connection becomes another data transfer bottleneck even if the host can send files to the network instantaneously. A viewer with a phone modem connection will not be able to transfer high-resolution images at a speed sufficient to support real-time streaming video.
  • One option is for the host to capture and encode any images in the lowest possible resolution to allow even the slowest connection to view real-time streaming video. However, the effect of capturing low-resolution images to enable the most primitive system's access to the images is to degrade the performance of a majority of viewers. Additionally, the images may need to be saved in such a low resolution that all detail is lost from the images. Degradation of the images, therefore, is not a plausible solution.
  • Another problem encountered is the inability of all users to support the same graphical image format selected by the host. Most personal computers are able to support the JPEG image format; however, network-enabled wireless phones or personal digital assistants may not be able to interpret the JPEG image format. Additionally, the less sophisticated hardware devices may not incorporate color displays. Access to video images should be provided to these users as well.
  • Finally, in such video distribution systems, the viewer has no control over the images. The viewer must rely solely on the host to provide a formatted and sized image having the proper view, resolution, and image settings. The viewer cannot adjust the image being displayed, the image resolution, or the image settings such as brightness, contrast and color. Further, the viewer is unable to control such parameters as compression of the transmitted data and the frame rate of video transmission.
  • SUMMARY OF THE INVENTION
  • The present invention is directed to an apparatus and method of transferring video and/or audio data to viewers such that the viewers can effectively display real-time streaming video output and continuous audio output. The apparatus and method may adapt the streaming video to each viewer such that system performance is not degraded by the presence of viewers having slow connections or by the presence of viewers having different hardware devices. The apparatus and method can further provide a level of image control to the viewer where each viewer can independently control the images received.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The features, objectives, and advantages of the invention will become apparent from the detailed description set forth below when taken in conjunction with the drawings, wherein like parts are identified with like reference numerals throughout, and wherein:
  • FIG. 1 is a block diagram of one embodiment of a multimedia distribution system.
  • FIG. 2 is an overview of the main program shown in FIG. 1.
  • FIG. 3 is a block diagram of a personal computer implementing the host process.
  • FIG. 4A is a diagram illustrating the video capture module.
  • FIG. 4B is a flow chart illustrating the function of the switching system.
  • FIG. 5A is a block diagram of a multimedia distribution module wherein the host operates as a server.
  • FIG. 5B is a block diagram illustrating the broadcast of video data by a web server.
  • FIG. 6 is a block diagram of a video stream format.
  • FIG. 7 is a block diagram of various video block formats.
  • FIG. 8 is a flow chart illustrating motion detection at a block level.
  • FIG. 9 is a flow chart illustrating motion detection at a frame level.
  • FIG. 10 is a flow chart illustrating a method of transmitting only those video image blocks that change.
  • FIG. 11 is a block diagram of an audio stream format.
  • FIG. 12 is a flow chart illustrating the encoding and generation of an audio frame.
  • FIG. 13 is a block diagram illustrating the broadcast of audio data by a web server.
  • FIG. 14 is a flow chart illustrating the dynamic updating of the domain name system.
  • FIG. 15 is a block diagram of a system for mirroring audio and video data.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • As used herein, a computer, including one or more computers comprising a web server, may be any microprocessor- or processor-controlled device or system that permits access to a network, including terminal devices, such as personal computers, workstations, servers, clients, mini computers, main-frame computers, laptop computers, a network of individual computers, mobile computers, palm-top computers, hand-held computers, set top boxes for a television, interactive televisions, interactive kiosks, personal digital assistants, interactive wireless communications devices, mobile browsers, or a combination thereof. The computers may further possess input devices such as a keyboard, mouse, touchpad, joystick, pen-input-pad, and output devices such as a computer screen and a speaker.
  • These computers may be uni-processor or multi-processor machines. Additionally, these computers include an addressable storage medium or computer accessible medium, such as random access memory (RAM), an electronically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), hard disks, floppy disks, laser disk players, digital video devices, compact disks, video tapes, audio tapes, magnetic recording tracks, electronic networks, and other techniques to transmit or store electronic content such as, by way of example, programs and data. In one embodiment, the computers are equipped with a network communication device such as a network interface card, a modem, or other network connection device suitable for connecting to a networked communication medium.
  • Furthermore, the computers execute an appropriate operating system such as Linux, Unix, Microsoft® Windows®, Apple® MacOS®, and IBM® OS/2®. As is convention, the appropriate operating system includes a communications protocol implementation which handles all incoming and outgoing message traffic passed over a network. In other embodiments, while different computers may employ different operating systems, the operating system will continue to provide the appropriate communications protocols necessary to establish communication links with a network.
  • The computers may advantageously contain program logic, or other substrate configuration representing data and instructions, which cause the computer to operate in a specific and predefined manner as described herein. In one embodiment, the program logic may advantageously be implemented as one or more modules.
  • As can be appreciated by one of ordinary skill in the art, each of the modules may comprise various sub-routines, procedures, definitional statements and macros. Each of the modules is typically separately compiled and linked into a single executable program. Therefore, the description of each of the modules in this disclosure is used for convenience to describe the functionality of the preferred system. Thus, the processes that are performed by each of the modules may be arbitrarily redistributed to one of the other modules, combined together in a single module, or made available in, for example, a shareable dynamic link library.
  • The modules may advantageously be configured to reside on the addressable storage medium and configured to execute on one or more processors. The modules include, but are not limited to, software or hardware components which perform certain tasks. Thus, a module may include, by way of example, components, such as, software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, Java byte codes, circuitry, data, databases, data structures, tables, arrays, and variables.
  • As used herein, multimedia refers to data in any form. For example, it may include video frames, audio blocks, text data, or any other data or information. Multimedia information may include any individual form or any combination of the various forms.
  • A block diagram of a multimedia distribution system according to aspects of the invention is shown in FIG. 1. The system is composed of a host 10 interfaced through, for example, a network 20 to at least one client 30. The host 10 is a computer including one or more processes or modules and may interface with various hardware devices on the computer. A process or module may be a set of instructions implemented in software, firmware or hardware, including any type of programmed step undertaken by components of the system. The client 30 is another computer including one or more process or modules. Advantageously, the client 30 is a remote computer interconnected to the host 10 through a network 20. The network 20 is any type of communication network as is commonly known by one skilled in the field and as was described previously. The network 20 may be a Local Area Network (LAN), a Wide Area Network (WAN), a public network such as the Internet, or a wireless network or any combination of such networks. The network 20 interconnection between the host 10 and the client 30 may be accomplished using hard wired lines or through wireless Radio Frequency (RF) links. The various embodiments of the invention are not limited by the interconnection method used in the network 20 or the physical location of the host 10 or clients 30.
  • A number of processes operate within the host 10 in order to allow the host 10 to interface with external devices 80 and with the client 30 through the network 20. One or more capture devices 42 interface with external devices 80 in order to transform the data provided by an external device 80 into a format usable by the host 10. In one embodiment, the capture device 42 is a video capture card that interfaces to an external video source. The video source may be generated by a video camera, video disc player, video cassette recorder, television video output, or any other device capable of generating a video source. The video capture card grabs the frames from the video source, converts them to digital signals, and formats the digital signals into a format usable by the host 10. The external device 80 may also be a video card within a computer for converting video signals that are routed to a monitor into a format usable by the host 10.
  • The external devices 80 are not limited to video sources and can include devices or sources of data of interest. For example, the external devices 80 may generate audio data. The capture device 42 interfaces with an audio source to convert the input signal to a digital signal, then to convert the digital signals into a format usable by the host 10. A variety of external devices 80 may be used to provide an audio signal. An audio signal may be provided from a microphone, a radio, a compact disc player, television audio output, or any other audio source.
  • Multiple external devices 80 may interface with the host 10. The external devices 80 may provide inputs to the host 10 simultaneously, sequentially, or in some combination. A switcher module 44 is used where there is a controllable switch (not shown) that is used to multiplex signals from multiple sources to a single capture device 42. The switcher 44 is used where multiple sources are controlled and is omitted if the host 10 does not have control over the selection of the source. If used, the switcher 44 receives control information through a communication port on the computer. An exemplary embodiment of a hardware switch used to multiplex multiple video sources to a single video capture card is provided in copending U.S. patent application Ser. No. 09/439,853, filed Nov. 12, 1999, entitled SIGNAL SWITCHING DEVICE AND METHOD, assigned to the assignee of the current application, and hereby incorporated herein by reference. A similar hardware switch may be used to multiplex multiple audio sources to a single audio capture card.
  • A multimedia operating system module 49 allows the capture devices to interface with one or more capture modules 40 a, 40 b. The capture modules 40 a, 40 b monitor the capture devices and respond to requests for images by transmitting the captured information in JPEG-encoded format, for example, to the main program module 46.
  • The host also includes a web server module 50, such as the Apache web server available from the Apache Software Foundation. The web server 50 is used to configure the host 10 as a web server. The web server 50 interfaces the host 10 with the various clients 30 through the network 20. The web server 50 sets up an initial connection to the client 30 following a client request. One or more Common Gateway Interfaces (CGI) 52 a, 52 b are launched for each client 30 by the web server module 50. Each CGI 52 submits periodic requests to the main program 46 for updated video frames or audio blocks. The web server 50 also configures the dedicated CGI 52 adapted to the capabilities of each client 30. The client 30 may monitor the connection and maintains some control over the information sent through the CGI 52. The client 30 can cause the web server 50 to launch a “set param” CGI module 54 to change connection parameters. The web server 50 conveys the control information to the other host processes through the “set param” CGI 54. Once the web server 50 establishes the network connection, the CGI 52 controls the information flow to the client 30.
  • The client 30 interfaces to the host through the network 20 using an interface module such as a browser 32. Commercially available browsers include Netscape Navigator and Microsoft's Internet Explorer. The browser 32 implements the communication formatting and protocol necessary for communication over the network 20. The client 30 is typically capable of two-way communications with the host 10. The two-way link allows the client 30 to send information as well as receive information. A TCP/IP socket operating system module 59 running on the host 10 allows the host to establish sockets for communication between the host 10 and the client 30.
  • The host 10 may also incorporate other modules not directly allocated to establishing communications to the client 30. For example, an IP PROC 60 may be included within the host 10 when the host 10 is configured to operate over, for example, the Internet. The IP PROC 60 is used to communicate the host's 10 Internet Protocol (IP) address. The IP PROC 60 is particularly useful when the host's IP address is dynamic and changes each time the host 10 initially connects to the network 20. In one embodiment, the IP PROC 60 at the host 10 works in conjunction with a Domain Name System (DNS) host server 90 (described in further detail below with reference to FIG. 14) connected to the network to allow clients 30 to locate and establish a connection to the host 10 even though the host 10 has a dynamic IP address.
  • An overview of the main program module 46 is provided in FIG. 2. The host implements a user interface 204 to receive input from the user through, for example, a keyboard or a mouse and to provide display and audio output to the user. The output may be in the form of an operating window displayed on a monitor that provides the user with an image display and corresponding control menus that can be accessed using a keyboard, a mouse or other user interface devices. A scheduler 210 operates simultaneously with the user interface 204 to control the operation of various modules. The user or an administrator of the host system may set up the scheduling of multimedia capture using the scheduler 210. Images or audio may be captured over particular time windows under the control of the scheduler 210 and those time windows can be selected or set by a user.
  • A licensing module 214 is used to either provide or deny the user access to specific features within the system. As is described in detail below, many features may be included in the system. The modularized design of the features allows independent control over user access to each feature. Independent control over user access allows the system to be tailored to the specific user's needs. A user can initially set up the minimum configuration required to support the basic system requirements and then later upgrade to additional features to provide system enhancements. Software licensing control allows the user access to additional features without requiring the user to install a new software version with the addition of each enhancement.
  • The host also performs subsystem control processes 220. The host oversees all of the subsystem processes that are integrated into the multimedia distribution system. These sub-processes include the multimedia capture system 230 that controls the capture of the video and audio images and the processing and formatting of the captured data. There may be numerous independent CGI processes running simultaneously depending on the number of clients connected to the host and the host's capacity. Each of the CGI processes accesses the network and provides output to the clients depending on the available captured data and the capabilities of the client.
  • A motion detection 240 process operates on the captured images to allow detection of motion over a sequence of the captured images. Motion detection can be performed on the entire image or may be limited to only a portion of the image. The operation of motion detection will be discussed in detail later.
  • Another process is an event response 250. The event response 250 process allows a number of predefined events to be configured as triggering events. In addition to motion detection, the triggering event may be the passage of time, detection of audio, a particular instant in time, user input, or any other event that the host process can detect. The triggering events cause a response to be generated. The particular response is configurable and may include generation and transmission of an email message, generation of an audio alert, capture and storage of a series of images or audio, execution of a particular routine, or any other configurable response or combination of responses.
  • Additional processes include an FTP process 260 and an IP Updater process 270. As discussed with reference to FIG. 1, the FTP process transfers the multimedia data to an FTP server to allow widespread access to the data. The IP Updater 270 operates to update the IP address of the host. The host may be identified by a domain name that is easily remembered. The domain name corresponds to an Internet Protocol address, but the host process may be connected to a network that utilizes dynamic IP addresses. The IP address of the server may change each time the host disconnects and reconnects to the network if dynamic IP addresses are used. The IP Updater 270 operates in conjunction with a Domain Name System (DNS) server to continually update the IP address of the host such that the host's domain name will always correspond to the appropriate IP address.
  • An example of a computer on which the host process resides is illustrated schematically in FIG. 3. The block diagram of FIG. 3 shows the host implemented on a personal computer 300. The host process is stored as a collection of instructions that are stored in the personal computer 300. The instructions may be stored in memory 304, such as Read-Only Memory (ROM) or Random Access Memory (RAM), a hard disk 306, a floppy disk to be used in conjunction with a floppy disk drive 308, or a combination of storage devices. The instructions are executed in the Central Processing Unit (CPU) 302 and are accessed through a bus 360 coupling the storage devices 304, 306, 308 to the CPU 302. The bus 360 can include at least one address bus and one data bus, although multiple buses may also be used. User input is coupled to the personal computer 300 through a keyboard 310, a mouse 312 or other user input device. Images are displayed to the user through a monitor 314 that receives signals from a video controller 316.
  • Video images are provided to the personal computer 300 from external video sources coupled to a video capture card 320. Although any video source may be used, a camera 322 and VCR 324 are shown in FIG. 3. A video switching system 330 is used to multiplex multiple video sources to a single video capture card 320. The video switching system 330 is controlled through a serial device controller 340. The host process controls which video source is used to supply the input by controlling the video switching system 330. The video switching system 330 is described further in the patent application previously incorporated by reference and is described below with reference to FIG. 4B.
  • Similarly, external audio sources are used to provide audio input to the personal computer 300. A microphone 352 and CD player 354 are shown as the external audio sources, although any audio source may be used. Audio is coupled from the external audio sources 352, 354 to the host process using an audio card 350.
  • The connection from the host to the network is made using a Network Interface Card (NIC) 360. The NIC 360 is an Ethernet card, but may be substituted with, for example, a telephone modem, a cable modem, a wireless modem or any other network interface.
  • FIG. 4A is a diagram illustrating a process for video capture using an apparatus such as that shown in FIG. 3. A video signal is generated in at least one video source 410. One video source may be used or a plurality of video sources may be used. A video switching system 330 is used when a plurality of video sources 410 is present. Each video source is connected to an input port of the video switching system 330. The video switching system 330 routes one of the plurality of input video signals to the video capture hardware 320 depending on the control settings provided to the video switching system 330 through a serial communications 340 link from the switcher 44 (see FIG. 1).
  • Video sources such as a VCR, TV tuner, or video camera generate composite video signals. The video capture hardware 320 captures a single video frame and digitizes it when the video switching system 330 routes a video source outputting composite video signals to the video capture hardware 320. The system captures an image using an Application Program Interface (API) 420, such as Video for Windows available from Microsoft Corp. The API transmits the captured image to the video capture module 430.
  • FIG. 4B is a flow chart illustrating the function of the video switching module 330 shown in FIGS. 3 and 4A. The video subsystem maintains a cache of time stamped, video images for each video-input source. Requests for data are placed on a queue in the serial communications module 340. When the video switching module 330 receives a request from the queue (step 452), it first determines whether the requested image is available (step 454). The requested image may be unavailable if, for example, the image is in the process of being captured. If the image is not available, the process returns to step 452 and attempts to process the request again at step 454. If the requested image is available, the switching module 330 determines whether the image already exists in the cache (step 456). If the image exists in the cache, the switching module 330 sends the image to the requesting CGI 52 a, 52 b (see FIG. 1) and removes the request from the queue (step 468). If the image does not exist in the cache, the switching module 330 proceeds to obtain the image. First, it determines whether the switcher is set to the source of the requested image (step 458). If the switcher is set to the proper source, the image is captured and placed in the cache (step 466). The image is then sent to the requesting CGI and the request is removed from the CGI (step 468). If the switcher is not set to the proper source, the switching module 330 causes a command to be sent to the switcher to switch to the source of the requested image (460). Next, depending on the video source and the capture device, optional operations may be performed to empty pipelines in the capture device's hardware or driver implementation (step 462). This is determined via test and interaction with the device during installation. The switching module 330 then waits a predetermined length of time (step 464). This delay allows the video capture device to synchronize with the new video input stream. The requested image is then captured and placed in the cache (step 466). The image is then sent to the requesting CGI, and the request is removed from the queue (step 468). Once the request has been removed, the switching module 330 returns to the queue to process the next request. Although the above description relates to the switching of video inputs, it may also apply to any switching module including, for example, the multimedia switcher 44 illustrated in FIG. 1.
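  • the request handling of FIG. 4B can be sketched for a single queued request as follows; the cache, switch control, capture, and reply interfaces are placeholders, and the availability check and pipeline-emptying step are omitted for brevity.

```python
import time

def service_request(request, cache, switcher, capture, send, settle_delay=0.5):
    """One pass of the FIG. 4B handling for a single queued request.

    cache maps source ids to time-stamped images; switcher, capture, and send
    stand in for the hardware switch control, the capture device, and the
    reply to the requesting CGI. settle_delay is the predetermined wait that
    lets the capture device synchronize with a newly selected input."""
    source = request["source"]
    image = cache.get(source)
    if image is None:                     # image not cached: obtain it
        if switcher.current_source != source:
            switcher.select(source)       # command the switch to the requested source
            time.sleep(settle_delay)      # let the capture device synchronize
        image = capture(source)           # capture the frame and place it in the cache
        cache[source] = image
    send(request["client"], image)        # reply to the CGI; the request is then dequeued
```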
  • Audio signals are captured in a process (not shown) similar to video capture. Audio sources are connected to multimedia audio hardware in the personal computer. The audio capture module makes periodic requests through an API such as Windows Multimedia, available from Microsoft Corp., for audio samples and makes the data available as a continuous audio stream.
  • The host 10 (see FIG. 1) distributes the multimedia data to requesting clients once the multimedia data has been captured. As noted above, the host is configured as a web server 50 in order to allow connections by numerous clients runs the host multimedia distribution application.
  • The client 30 can be a remote hardware system that is also connected to the network. The client may be configured to run a Java-enabled browser. The term “browser” is used to indicate an application that provides a user interface to the network, particularly if the network is the World Wide Web. The browser allows the user to look at and interact with the information provided on the World Wide Web. A variety of commercially available browsers are available for computers. Similarly, compact browsers are available for use in portable devices such as wireless phones and personal digital assistants. The features available in the browser may be limited by the available processing, memory, and display capabilities of the hardware device running the browser.
  • Java is a programming language developed especially for writing client/server and networked applications. A Java applet is commonly sent to users connected to a particular web site. The Java archive, or Jar, format represents a compressed format for sending Java applets. In a Jar file, instructions contained in the Java applet are compressed to enable faster delivery across a network connection. A client running a Java-enabled browser can connect to the server and request multimedia images.
  • Wireless devices may implement browsers using the Wireless Application Protocol (WAP) or other wireless modes. WAP is a specification for a set of communication protocols to standardize the way that wireless devices, such as wireless phones and radio transceivers, are used for Internet access.
  • Referring to FIGS. 1 and 5A, a client 30 initially connecting via the network 20 to the host makes a web request, or Type I request 512, while logged on a website. As used herein, the term “website” refers to one or more interrelated web page files and other files and programs on one or more web servers. The files and programs are accessible over a computer network, such as the Internet, by sending a hypertext transfer protocol (HTTP) request specifying a uniform resource locator (URL) that identifies the location of one of the web page files. The files and programs may be owned, managed or authorized by a single business entity or an individual. Such files and programs can include, for example, hypertext markup language (HTML) files, common gateway interface (CGI) files, and Java applications.
  • As used herein, a “web page” comprises that which is presented by a standard web browser in response to an HTTP request specifying the URL by which the web page file is identified. A web page can include, for example, text, images, sound, video, and animation.
  • The server performs Type I processing 510 in response to the Type I request 512 from the client. In Type I processing, the server opens a communication socket, designated socket “a” in FIG. 5A, and sends a Jar to the client. The first communication socket, socket “a,” is closed once the Jar is sent to the client. The client then extracts the Jar and runs it as a video applet once the entire Jar arrives at the client system. Alternatively, the functionality of the video applet can be implemented by software or firmware at the client.
  • The video applet running on the client system makes a request to the server running on the host. The request specifies parameters necessary for activation of a Common Gateway Interface (CGI) necessary for multimedia distribution. The video applet request may supply CGI parameters for video source selection, frame rate, compression level, image resolution, image brightness, image contrast, image view, and other client configurable parameters. The specific parameters included in the request can be determined by which button or link was selected as part of the Type I request. The web page may offer a separate button or link for each of several classes of clients. These classes refer to the capability of clients to receive data in specific formats and at specific rates. For example, one button may correspond to a request for the data at a high video stream rate (30 frames per second) while another button corresponds to a request for the data in simple JPEG (single frame) format. Alternatively, the video applet can survey the capabilities of the client system and select appropriate parameters based upon the results of the survey, or the video applet can respond to user input.
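  • As an illustration of how such client-configurable parameters might be passed to the CGI, the following sketch builds a hypothetical request URL. The parameter names (source, fps, quality, width, height) and the /cgi-bin/video path are assumptions made for the example and are not taken from the disclosure.

```java
/** Illustrative only: the CGI path and parameter names below are assumptions. */
public class VideoCgiRequest {
    static String build(String host, int source, int fps, int quality, int width, int height) {
        return "http://" + host + "/cgi-bin/video"
                + "?source=" + source      // video source selection
                + "&fps=" + fps            // requested frame rate (e.g. 30 or 1)
                + "&quality=" + quality    // JPEG compression level
                + "&width=" + width + "&height=" + height;  // image resolution
    }

    public static void main(String[] args) {
        // A "high rate" button and a "single frame JPEG" button might differ only in parameters:
        System.out.println(build("example-host", 1, 30, 75, 320, 240));
        System.out.println(build("example-host", 1, 1, 75, 320, 240));
    }
}
```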
  • The server receives the video applet request and, in response, establishes a communication port, denoted socket “b,” between the server and the client.
  • The server then launches a CGI using the parameters supplied by the video applet request and provides client access on socket “b.” The video CGI 530 established for the client then sends the formatted video image stream over the socket “b” connection to the video applet running on the client. The video applet running on the client receives the video images and produces images displayed at the client.
  • The applet may be configured to perform a traffic control function. For example, the client may have requested a high stream rate (e.g., 30 frames per second) but may be capable of processing or receiving only a lower rate (e.g., 10 frames per second). This reduced capability may be due, for example, to network transmission delays or to other applications running on the client requiring more system resources. Once a transmission buffer memory is filled, the server is unable to write further data. When the applet detects this backup, it submits a request to the server for a reduced stream rate. This request for change is submitted via, for example, a “set parameter” CGI 570, or a frame rate CGI, which is described in further detail below with reference to FIG. 5B.
  • To detect a backup, the applet can compare a timestamp embedded in each frame (described below with reference to FIG. 6) with the client's internal clock, for example. By detecting a change in the relative time between consecutive frames, the applet is able to recognize the backup and skip processing of delayed frames. Thus, the client proceeds to process the current frame rather than an old frame. For example, if the client receives 30 frames per second and can only process one frame per second, the applet will cause the client to process the first frame, skip the next 29 frames and process the 31st frame.
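  • A minimal sketch of this timestamp comparison, assuming the embedded frame timestamp is expressed in milliseconds of host-clock time and that a fixed lag tolerance is acceptable, might look like the following. The class and field names are illustrative.

```java
/** Sketch of the delay check described above; names and the lag tolerance are assumptions. */
public class FrameDelaySkipper {
    private long clockOffset = Long.MIN_VALUE;  // host-clock minus client-clock, set on first frame
    private final long maxLagMillis;            // tolerated lag before frames are skipped

    public FrameDelaySkipper(long maxLagMillis) { this.maxLagMillis = maxLagMillis; }

    /** Returns true if the frame should be decoded and displayed, false if it should be skipped. */
    public boolean shouldProcess(long frameTimestampMillis) {
        long now = System.currentTimeMillis();
        if (clockOffset == Long.MIN_VALUE) {
            clockOffset = frameTimestampMillis - now;           // align host and client clocks once
        }
        long lag = (now + clockOffset) - frameTimestampMillis;  // how far behind the stream we are
        return lag <= maxLagMillis;  // e.g. with 30 fps arriving and 1 fps capacity, most frames skip
    }
}
```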
  • The client can also select to view only a portion of the image. For example, the client may select a region of the image that he wishes to magnify. The applet allows the client to submit a request to the CGI to transmit only blocks corresponding to the selected region. By selecting only the selected blocks, the necessary bandwidth for transmission is further reduced. Thus, the client can zoom to any region of the captured image. As a further example, the client may submit a request, via the applet, to pan across the image in any direction, limited only by the boundaries of the captured image. The applet submits this request as a change in the requested region.
  • Each time a video frame or audio block is encoded in the server, it is available to be transmitted to the client. The video CGI 530 determines, according to the parameters passed by the video applet, whether to submit a request for an additional video frame and whether to send the additional information to the client.
  • A similar audio CGI 560 is established using an audio applet running on the client. Each time an audio block is encoded at the server, it is available to be transmitted to the client. The audio CGI 560 transmits the audio information to the client as a continuous stream.
  • The applet may be configured to perform an audio traffic control function similar to that described above with respect to the video CGI 530. For example, the client may have initially requested an 8-bit audio stream but may be capable of only handling a 4-bit or a 2-bit stream.
  • 2-bit and 4-bit audio streams are encoded based on adaptive differential pulse code modulation (ADPCM) as described by Dialogic Corporation. The 4-bit audio samples are generated from 16-bit audio samples at a fixed rate. The 2-bit audio encoder modifies the standard ADPCM by removing the two lowest step bits, resulting in 2-bit samples from the original 16-bit data. An 8-bit stream is generated by converting 16-bit samples into 8 bits using a μ-law encoder, which is utilized in the Sun Microsystems, Inc. audio file format. This encoder is defined in ITU-T standard G.711.
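  • The 2-bit and 4-bit ADPCM variants described above are specific to the referenced encoder and are not reproduced here. For the 8-bit path, one common formulation of a G.711 μ-law encoder converting 16-bit linear samples to 8-bit values is sketched below; this is a generic rendering of the standard algorithm, not code from the disclosure.

```java
/** Generic G.711 mu-law encoder sketch: 16-bit linear PCM sample to one 8-bit code. */
public class MuLawEncoder {
    static byte linearToMuLaw(int pcm) {        // pcm is a signed 16-bit sample value
        final int BIAS = 0x84, CLIP = 32635;
        int sign = (pcm >> 8) & 0x80;           // extract sign bit
        if (sign != 0) pcm = -pcm;              // work with the magnitude
        if (pcm > CLIP) pcm = CLIP;             // clip to avoid overflow after biasing
        pcm += BIAS;
        int exponent = 7;                       // find the segment (highest set bit among bits 8-14)
        for (int mask = 0x4000; (pcm & mask) == 0 && exponent > 0; mask >>= 1) exponent--;
        int mantissa = (pcm >> (exponent + 3)) & 0x0F;
        return (byte) ~(sign | (exponent << 4) | mantissa);   // mu-law codes are bit-inverted
    }

    public static void main(String[] args) {
        System.out.printf("0 -> 0x%02x, 1000 -> 0x%02x%n",
                linearToMuLaw(0) & 0xFF, linearToMuLaw(1000) & 0xFF);
    }
}
```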
  • When the applet detects a discrepancy between the transmitted audio data and the capabilities of the client, it submits a request for change to the server. The audio CGI 560 then closes the audio stream and reopens it at the appropriate data rate.
  • As noted above, the client determines the type of CGI that controls the information flowing to it on socket “b” by making the appropriate request. In the case of a JPEG Push CGI 540 or a Wireless Application Protocol (WAP) CGI 550, no applet is involved and no socket “b” is established. For example, if the client is an Internet-enabled wireless device utilizing a WAP browser, a video CGI 530 is not set up. Instead, a WAP-enabled device requests a WAP CGI 550 to be set up at the server. Video frames are then routed to the WAP-enabled device using the WAP CGI in lieu of the video CGI 530 via socket “a”. The video frames are routed to the client as JPEG files. Similarly, a JPEG Push CGI 540 is set up at the server if the client requests JPEG Push. Thus, in response to a request by a client, the web server 510 utilizes a separate CGI, and where applicable a separate socket “b” connection, appropriate for the capabilities of that particular client.
  • An additional CGI that utilizes a socket is the “set parameter” CGI 570. A client may revise the parameters that control the received images and audio by adjusting controls that are available on the video applet. When the client requests a change in parameters the “set parameter” CGI 570 is launched to change the parameters at the server. It can be seen that each individual client may change the CGI settings associated with that particular client without affecting the images or audio being sent to any other client. Thus, each individual client has control over its received multimedia without affecting the capture process running on the server system.
  • FIG. 5B is a block diagram illustrating the streaming of the video data by the host to clients and the flow of commands and information between components of the host and the client. The video streaming begins when the client, via the remote user's web browser 505 a, sends a request (indicated by line 581) to the host server system 510. In one embodiment, the request is an HTTP request. In response to the request, the server system 510 sends (line 582) a Jar to the client's web browser 505. The Jar includes an applet that is launched by the client's web browser 505. Although FIG. 5B indicates the web browser 505 as having two blocks 505 a, 505 b, it is understood that the two blocks 505 a, 505 b only illustrate the same browser before and after the launching of the applet, respectively. Among other functions, the applet then sends a request to the web server 510 for the web server 510 to launch a CGI (line 583). Additionally, the applet causes the client to send client-specific parameters to the web server 510. In response to the request, the web server 510 establishes a socket and launches a CGI 530 according to the parameters supplied by the client and information associated with the socket (line 584). The CGI 530 submits periodic requests for video information to a video encoder 525 (line 585). The video encoder 525 receives JPEG-encoded video data from a video capture module 515 and formats the data for streaming as described, for example, below with reference to FIGS. 6 and 7 (line 586). The encoder 525 responds to the requests from the CGI 530 by transmitting the encoded video information to the CGI 530 (line 585). The video encoder module 525 and the video CGI module 530 may be sub-modules in the video CGI 52 a shown in FIG. 1. The CGI 530 transmits the encoded video frames to the applet over the established socket (line 587). The applet decodes the encoded video frames, displaying the video images to the user.
  • As noted above, the applet may be configured to perform a traffic control function. When the applet is launched on the remote viewer's browser 505 b, it launches a frame-rate monitoring thread 535 (line 591). The thread 535 monitors the video stream for frame delays (step 545) by, for example, comparing time stamps of video frames with the client's internal clock, as described above. As indicated in FIG. 5B, the video applet continuously checks for frame delays (line 593). When a frame delay is detected (line 594), the applet requests that the web server 510 launch a frame-rate CGI 555. The request also submits parameters to indicate the frame rate capabilities of the client. The parameters are submitted to the video CGI 530 (line 595) which changes the rate at which video is streamed to the user.
  • The video CGI compresses and formats the video images for streaming in order to reduce the required network bandwidth. The video applet running on the client extracts the video image from the compressed and encoded data. A block diagram of the video stream format is shown in FIG. 6. The video stream can be formatted in several ways with each format transmitting separate video image information. All video stream formats consist of a single six-byte header 602 followed by a number of video blocks 604 a-604 nn.
  • The six-byte header 602 is made up of a one-byte error code 610, a one-byte source 612, and a four-byte connection ID 614. The one-byte error code 610 indicates whether an error is present in the transmission. A zero value error code 610 indicates a successful transmission follows. A non-zero error code indicates an error has been detected and no data blocks will follow. The non-zero error code 610, therefore, indicates the data stream is complete. The one-byte source 612 indicates the origin of the video image. A zero value source 612 indicates the host as the source of the video image. A one in the source 612 indicates the image is coming from a mirror site. The use of a mirror site is discussed in detail below. Use of a mirror site is not otherwise detectable by the client and does not degrade the image received at the client. The four-byte connection ID 614 is used to designate the specific client. The connection ID 614 is an identifier that is unique to each connected user.
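  • A sketch of reading this six-byte header is shown below. Big-endian byte order is an assumption, since the byte order is not stated above.

```java
import java.io.DataInputStream;
import java.io.IOException;

/** Sketch of reading the six-byte stream header (602); big-endian order is assumed. */
final class StreamHeader {
    final int errorCode;     // 0 = success, non-zero = error and no data blocks follow
    final int source;        // 0 = host, 1 = mirror site
    final int connectionId;  // unique per connected client

    private StreamHeader(int errorCode, int source, int connectionId) {
        this.errorCode = errorCode; this.source = source; this.connectionId = connectionId;
    }

    static StreamHeader read(DataInputStream in) throws IOException {
        int errorCode = in.readUnsignedByte();   // one-byte error code (610)
        int source = in.readUnsignedByte();      // one-byte source (612)
        int connectionId = in.readInt();         // four-byte connection ID (614)
        return new StreamHeader(errorCode, source, connectionId);
    }
}
```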
  • A series of video blocks 604 follow the header 602. Different video block formats are used to transmit different size video images. However, all video block formats utilize a structure having a four-byte frame size field 620 followed by a four-byte block type field 622, followed by block data fields 624.
  • A first type of video block 604 is defined as block type N, where N represents a positive integer defining the number of image segments encoded in the block. A block type N format utilizes a data triplet to define each of N video segments. Each of the N data triplets contains a four-byte X position field 632, a four-byte Y position field 634, and a four-byte width field 636. The X and Y positions define the location of the segment on the client screen. The width field 636 defines the width of the video segment. The height of the video segment for the block type N video format is preset at sixteen pixels. Thus, each of the data triplets defines a video stripe image that is displayed on the client screen. Following the N data triplets, the block type N video format utilizes a series of data blocks. A four-byte data offset field 640 is used to facilitate faster transmission of data by not transmitting identical bytes of data at the beginning of each image. For example, two consecutive images may have the identical first 600 bytes of data. The data offset field 640 will be set to 600 and will prevent retransmission of those 600 bytes.
  • A Data Size (DS) field 642 follows the data offset field 640 and is used to define the size of the data field that follows. Two four-byte timestamp fields 644, 646 follow the DS field 642. The first timestamp field 644 is used to timestamp the video image contained in the block type N image. The timestamp 644 may be used to update a timestamp that is displayed at the client. The second timestamp field 646 is used to synchronize the video stream with an audio stream. The contents of the DS field 642 define the number of data bytes in the data field 648 that follows the timestamp fields 644 and 646. The information in the data field 648 is JPEG encoded to compress the video image. Thus, each data triplet defines the location and width of a JPEG encoded video image stripe. The segments form a single video stripe in the image when all of the segments have the same Y coordinate. The initial segment 650 a is a sixteen-pixel-high segment having a width defined in the first data triplet. Similarly, subsequent segments 650 b-650 n are sixteen-pixel-high segments with widths defined by the width fields 636 b-636 n of the corresponding triplets.
  • Another video block type is denoted block type-3 and is also known as a Single Block type. The structure of the Single Block is shown in FIG. 7. The Single Block format begins with a pair of four-byte data fields. The first four-byte data field provides the initial horizontal location, X0 710. The second four-byte data field provides the initial vertical location, Y0 712. The coordinates X0 710 and Y0 712 define the upper left corner of the video image provided in the Single Block. A second pair of four-byte data fields follows the first pair. The second pair of data fields defines the lower right corner of the video image provided in the Single Block. The first data field in the second pair provides the final horizontal position, X1 714, and the second data field in the pair provides the final vertical position, Y1 716. A four-byte Data Offset field 718 follows the two pairs of coordinates. A Data Size (DS) field 720 follows the Data Offset field 718 and is used to define the number of bytes in the data field 726. Immediately following the DS field 720 are two four-byte timestamp fields 722 and 724 to identify the time the video image was generated. The video applet running on the client can extract the timestamp information in order to overlay a timestamp on the image. The Single Block is completed with a data field 726 consisting of the number of data bytes defined in the DS field 720. Thus, the Single Block type defines a rectangular video image spanning the coordinates (X0, Y0)-(X1, Y1).
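  • The Single Block layout lends itself to a straightforward reader, sketched below. Big-endian byte order and a four-byte Data Size field are assumptions; the coordinate, Data Offset, and timestamp widths follow the description above.

```java
import java.io.DataInputStream;
import java.io.IOException;

/** Sketch of reading one Single Block (block type-3) body as laid out in FIG. 7. */
final class SingleBlock {
    int x0, y0, x1, y1;      // upper-left and lower-right corners of the rectangle
    int dataOffset;          // leading bytes shared with the previous image and not retransmitted
    long imageTimestamp;     // first timestamp field (722)
    long syncTimestamp;      // second timestamp field (724), for audio/video synchronization
    byte[] jpegData;         // DS (720) bytes of JPEG-encoded image data

    static SingleBlock read(DataInputStream in) throws IOException {
        SingleBlock b = new SingleBlock();
        b.x0 = in.readInt();                              // X0 (710)
        b.y0 = in.readInt();                              // Y0 (712)
        b.x1 = in.readInt();                              // X1 (714)
        b.y1 = in.readInt();                              // Y1 (716)
        b.dataOffset = in.readInt();                      // Data Offset (718)
        int dataSize = in.readInt();                      // Data Size (720), assumed four bytes
        b.imageTimestamp = in.readInt() & 0xFFFFFFFFL;    // timestamp (722)
        b.syncTimestamp = in.readInt() & 0xFFFFFFFFL;     // timestamp (724)
        b.jpegData = new byte[dataSize];
        in.readFully(b.jpegData);                         // data field (726)
        return b;
    }
}
```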
  • Block type-4, also designated a Synchronization Frame, has a data format identical to that of the above-described Single Block. In the Synchronization Frame, the initial horizontal and vertical coordinates, X0 and Y0, are set to zero. Setting the initial coordinates to zero aligns the upper left corner of the new image with the upper left corner of the existing image. The final horizontal and vertical coordinates in the Synchronization Frame correspond to the width of the whole image and the height of the whole image, respectively. Therefore, it can be seen that the Synchronization Frame can be used to refresh the entire image displayed at the client. The Synchronization Frame is used during the dynamic update of the video frame rate in order to limit transmission delays, as described above with reference to FIG. 5B.
  • Block type-1 does not contain any image data within it. Rather it is used to indicate a change in the transmitted image size. The block type-1 format consists of a four-byte data field containing the New Width 740, followed by a four-byte data field containing the New Height 742. The block type-1 information must be immediately followed by a full-image Single Block or Synchronization Frame.
  • Finally, block type-2 is designated the Error Block. The Error Block consists solely of a one-byte Error Code 750. The Error Block is used to indicate an error in the video stream. Transmission of the video stream is terminated following the Error Code 750.
  • Referring now to FIG. 8, motion detection which can be carried out by the host will be described. Once the image has been captured into a JPEG-encoded frame, for example, the contents of a frame can further be processed by the main program module 46 (see FIG. 1) as follows. Data from subsequent video frames can be compared to determine whether the frames capture motion. FIG. 8 shows a flow chart of the motion detection process. A JPEG-encoded frame is received from the video capture module 40 a by the main program module 46 (see FIG. 1). The frame is first subdivided into a grid of, for example, 16 blocks by 16 blocks in order to detect motion within sequential images (step 802). Motion can be detected in each individual block. The number of blocks used to subdivide the frame is determined by the precision with which motion detection is desired. A large number of blocks per frame increases the granularity and allows for fine motion detection but comes at a cost of processing time and increased false detection of motion due to, for example, jitter in the image created by the camera or minute changes in lighting. In contrast, a lower number of blocks per frame provides decreased resolution but allows fast image processing. Additionally, the frame may be the complete image transmitted to the clients or may be a subset of the complete image. In other words, motion detection may be performed on only a specific portion of the image. The host user may determine the size and placement of this portion within the complete image, or it may be predetermined.
  • Once the frame has been subdivided, each block in the grid is motion processed (referenced in FIG. 8 as 810). Motion processing is performed on each block using comparisons of the present image with the previous image. First, at step 812, a cross-correlation is calculated between the block of the current image being processed and the corresponding block of the previous image. In one embodiment, the cross-correlation includes converting the captured blocks to grayscale and using the gray values of each pixel as the cross-correlated variable. Alternatively, the variable used for cross-correlation may be related to other aspects of the image such as light frequency of pixels.
  • At step 814, the cross-correlation is then compared with a predetermined threshold. The predetermined cross-correlation threshold can be a static value used in the motion detection process or it can be dynamic. If the cross-correlation threshold is dynamic, it may be derived from the size of the blocks or may be set by the host user. The host user may set the cross-correlation threshold on a relative scale where the scale is relative to a range of acceptable cross-correlation values. Use of a relative scale allows the host user to set a cross-correlation threshold without having any knowledge of cross-correlation. It may be preferable for the cross-correlation threshold to be set higher when the block size is large. In contrast, a lower cross-correlation threshold may be preferable where the block size is small and there are not many pixels defining the block. In addition, the cross-correlation threshold can be set in accordance with the environment in which the system operates (e.g., outdoor versus indoor) and the particular use of the motion detection (e.g., detecting fast movement of large objects).
  • If, at step 814, the cross-correlation threshold is not exceeded (i.e., the blocks are sufficiently different), the process next calculates the variance in the brightness of the block over the corresponding block of the previous image (step 816). The variance is compared against a variance threshold at step 818. Again, the variance threshold may be static or dynamically determined. If the calculated variance falls below the variance threshold then no motion is indicated in the block, and the process continues to step 890. The block is not marked as one having motion. However, if the variance exceeds the variance threshold, the block is marked as having motion at step 820, and the process continues to step 890.
  • On the other hand, if the calculated cross-correlation is above the predetermined threshold at step 814 (i.e., blocks are sufficiently similar), then no motion has been detected, and the process continues to step 890. The block is not marked as one having motion. In an alternate embodiment, the brightness variance may be calculated and compared to a variance threshold. Thus, brightness variances alone may be sufficient to detect motion. However, to reduce the number of false positives, the preferred embodiment illustrated in FIG. 8 requires both a sufficient variance in brightness and in the cross-correlation variable.
  • At step 890, the routine checks to see if all blocks have been processed. If all blocks have been processed, the motion detection routine in the main program 46 terminates (step 899) and returns the results to the video capture module 40 a shown in FIG. 1. However, if not all blocks of the current image have been processed, the routine returns to motion processing (reference 810) to analyze the next block.
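  • The per-block test of steps 812-820 can be sketched as follows. The exact cross-correlation and variance formulas are not specified above, so a normalized cross-correlation of grayscale values and the variance of pixel-wise brightness differences are used here as one plausible reading; both thresholds are supplied by the caller.

```java
/** Sketch of the per-block motion test (steps 812-820); formulas are illustrative choices. */
final class BlockMotionDetector {
    private final double correlationThreshold;  // below this, blocks are "sufficiently different"
    private final double varianceThreshold;     // above this, the block is marked as having motion

    BlockMotionDetector(double correlationThreshold, double varianceThreshold) {
        this.correlationThreshold = correlationThreshold;
        this.varianceThreshold = varianceThreshold;
    }

    /** current and previous hold grayscale pixel values (0-255) for the same block position. */
    boolean hasMotion(double[] current, double[] previous) {
        double corr = normalizedCrossCorrelation(current, previous);   // step 812
        if (corr >= correlationThreshold) return false;                // step 814: similar => no motion
        double var = varianceOfDifferences(current, previous);         // step 816
        return var > varianceThreshold;                                // steps 818-820
    }

    private static double normalizedCrossCorrelation(double[] a, double[] b) {
        double meanA = mean(a), meanB = mean(b);
        double num = 0, da = 0, db = 0;
        for (int i = 0; i < a.length; i++) {
            num += (a[i] - meanA) * (b[i] - meanB);
            da += (a[i] - meanA) * (a[i] - meanA);
            db += (b[i] - meanB) * (b[i] - meanB);
        }
        return (da == 0 || db == 0) ? 1.0 : num / Math.sqrt(da * db);  // flat blocks treated as similar
    }

    private static double varianceOfDifferences(double[] a, double[] b) {
        double[] diff = new double[a.length];
        for (int i = 0; i < a.length; i++) diff[i] = a[i] - b[i];
        double m = mean(diff), v = 0;
        for (double d : diff) v += (d - m) * (d - m);
        return v / diff.length;
    }

    private static double mean(double[] v) {
        double s = 0;
        for (double x : v) s += x;
        return s / v.length;
    }
}
```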
  • FIG. 9 shows a flow chart of the motion detection process performed by the main program 46 (see FIG. 1) on a frame level. Motion detection requires comparison of at least two frames, one of which is used as a reference frame. Initially, a first frame is captured and used as the reference frame for determining motion detection (step not shown in FIG. 9). The first step in detecting motion is capture of the current frame (step 902). Motion detection (step 800) on the block level, as described above with reference to FIG. 8, is performed on the captured frame using the initial frame as the reference. Following motion detection on the block level (step 800), the motion detection process calculates the fraction of blocks that have motion (step 910). The calculated fraction is compared against “low,” “medium,” and “high” thresholds. The thresholds may be static or dynamic as described above for the thresholds in the block motion detection process (step 800).
  • If, at step 920, the calculated fraction falls below the “low” threshold, then no motion has been detected in the frame, and the detection process proceeds to step 990. However, if the calculated fraction exceeds the lowest threshold then the fraction must lie within one of three other ranges, and the process continues to step 930.
  • At step 930, the calculated fraction is compared against the “medium” threshold. If the calculated fraction does not exceed the “medium” threshold (i.e., the fraction is in the low-medium range), the process continues to step 935. At step 935, the motion detection process performs “slight” responses. Slight responses may include transmitting a first email notification to an address determined by the host user, sounding an audible alert, originating a phone call to a first number determined by the host user, or initiating predetermined control of external hardware, such as alarms, sprinklers, or lights. Any programmable response may be associated with the slight responses, although advantageously, the lowest level of response is associated with the slight response. After performing the “slight” responses, the process continues to step 960.
  • If, at step 930, the calculated fraction exceeds the “medium” threshold, the process continues to step 940. At step 940, the calculated fraction is compared against the “high” threshold. If the calculated fraction does not exceed the “high” threshold (i.e., the fraction is in the medium-high range), the process continues to step 945. At step 945, the motion detection process performs moderate responses. Moderate responses may include any of the responses that are included in the slight responses. Advantageously, the moderate responses are associated with a higher level of response. A second email message may be transmitted indicating the detected motion lies within the second range, or a second predetermined phone message may be directed to a phone number determined by the host user. After performing the “moderate” responses, the process continues to step 960.
  • If, at step 940, the calculated fraction exceeds the “high” threshold (i.e., the fraction is in the high range), the process continues to step 950. At step 950, the motion detection process performs severe responses. Advantageously, the most extreme actions are associated with severe responses. The severe responses may include transmitting a third email message to a predetermined address, originating a phone call with a “severe” message to a predetermined phone number, originating a phone call to a predetermined emergency phone number, or controlling external hardware associated with severe responses. External hardware may include fire sprinklers, sirens, alarms, or emergency lights. After performing the “severe” responses, the process continues to step 960.
  • At step 960, the motion detection process logs the motion and the first twelve images having motion regardless of the type of response performed. The motion detection threshold is, in this manner, used as a trigger for the recording of images relating to the motion-triggering event. The images are time-stamped, correlating the motion-triggering event with a time frame. Motion detection using this logging scheme is advantageously used in security systems or any system requiring image logging in conjunction with motion detection. The motion detection process is done (step 990) once the twelve motion images are recorded. The motion detection process may be part of a larger process such that the motion detection process repeats indefinitely. Alternatively, the motion detection process may run on a scheduled basis as determined by another process. Although the foregoing example utilizes low, medium and high thresholds, fewer or more thresholds can be used.
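  • The frame-level decision of FIG. 9 reduces to comparing the fraction of moving blocks against the three thresholds, as in the following sketch. The threshold values and the MotionLevel names are illustrative; the actual responses (email, phone call, external hardware control) would be attached to the returned level.

```java
/** Sketch of the frame-level classification (steps 910-950); names and values are illustrative. */
enum MotionLevel { NONE, SLIGHT, MODERATE, SEVERE }

final class FrameMotionClassifier {
    private final double low, medium, high;   // e.g. 0.05, 0.20, 0.50 (placeholder values)

    FrameMotionClassifier(double low, double medium, double high) {
        this.low = low; this.medium = medium; this.high = high;
    }

    MotionLevel classify(boolean[] blockHasMotion) {
        int moving = 0;
        for (boolean b : blockHasMotion) if (b) moving++;
        double fraction = (double) moving / blockHasMotion.length;   // step 910
        if (fraction < low) return MotionLevel.NONE;        // step 920: no motion detected
        if (fraction < medium) return MotionLevel.SLIGHT;   // steps 930/935: slight responses
        if (fraction < high) return MotionLevel.MODERATE;   // steps 940/945: moderate responses
        return MotionLevel.SEVERE;                          // step 950: severe responses
    }
}
```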
  • Additional advantages may be realized using block motion detection in conjunction with the different image encoding formats shown in FIG. 6 and FIG. 7. Transmitting a complete video image to a client requires a great deal of network bandwidth even though the image may be JPEG-encoded. The amount of network bandwidth required to transmit images to a client can be reduced by recognizing that much of the data within an image remains the same from one image to the next. Only a small fraction of the image may contain data not already transmitted to the client in a previous image. The network bandwidth requirement can therefore be reduced by transmitting only those blocks that change from image frame to image frame. The client is not aware that the entire image is not retransmitted each time because those blocks that are not retransmitted contain no new information.
  • A process for conserving network bandwidth by transmitting only changed image blocks is performed by the video CGI 52 a (see FIG. 1) and is shown in FIG. 10. The process begins by capturing an image (step 1010). The process then performs block motion detection 800 as described above with reference to FIG. 8. Additionally, at step 1020, the oldest blocks in the image, those unchanged after a predetermined number of image capture cycles, are marked as having changed even though they may remain the same. Marking the oldest blocks as having changed allows the image at the client to be refreshed over a period of time even though there may be no new information in the image frame. At step 1030, the process branches depending on a chosen compression level. The level of compression may be preselected by the host. Alternatively, the host may offer the client a choice of compression levels. If low compression is selected, the process continues to step 1040, and the image to be transmitted to the client is set to the full image frame. The process then constructs the appropriate header (step 1042) and creates the JPEG image for the full image frame (step 1044). The process then proceeds to step 1090.
  • When medium compression is selected at step 1030, the process first finds the minimum region containing changed blocks (step 1050). The fraction of changed blocks in the minimum region is compared to a predetermined threshold at step 1052. If the fraction exceeds the predetermined threshold, the process constructs a header (step 1042), creates a JPEG image (step 1044), and proceeds to step 1090. On the other hand, if the fraction is less than the predetermined threshold at step 1052, the process continues to step 1060.
  • If high compression is selected at step 1030, the process continues to step 1060. At step 1060, the process constructs a header and stripe image for the changed blocks and the oldest unchanged blocks and proceeds to step 1065. At step 1065, the process creates the JPEG blocks for the stripe image and proceeds to step 1090. At step 1090, the data is transmitted to the client.
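  • Two of the helper steps above, ageing out stale blocks (step 1020) and finding the minimum region containing changed blocks together with its changed fraction (steps 1050-1052), are sketched below. The class and method names are illustrative.

```java
/** Sketch of steps 1020 and 1050-1052; names and the region representation are assumptions. */
final class ChangedBlockPlanner {
    private final int[][] age;        // capture cycles since each block was last sent
    private final int maxAge;         // after this many cycles a block is resent anyway

    ChangedBlockPlanner(int rows, int cols, int maxAge) {
        this.age = new int[rows][cols];
        this.maxAge = maxAge;
    }

    /** Step 1020: mark the oldest unchanged blocks as changed so the image slowly refreshes. */
    void applyAging(boolean[][] changed) {
        for (int r = 0; r < changed.length; r++) {
            for (int c = 0; c < changed[r].length; c++) {
                if (changed[r][c] || ++age[r][c] >= maxAge) {
                    changed[r][c] = true;
                    age[r][c] = 0;          // block will be transmitted, so reset its age
                }
            }
        }
    }

    /** Step 1050: smallest block rectangle {rowMin, colMin, rowMax, colMax} containing all changes,
     *  or null when nothing changed. */
    static int[] minimumChangedRegion(boolean[][] changed) {
        int rMin = Integer.MAX_VALUE, cMin = Integer.MAX_VALUE, rMax = -1, cMax = -1;
        for (int r = 0; r < changed.length; r++) {
            for (int c = 0; c < changed[r].length; c++) {
                if (changed[r][c]) {
                    rMin = Math.min(rMin, r); cMin = Math.min(cMin, c);
                    rMax = Math.max(rMax, r); cMax = Math.max(cMax, c);
                }
            }
        }
        return rMax < 0 ? null : new int[] { rMin, cMin, rMax, cMax };
    }

    /** Step 1052: fraction of blocks inside the region that actually changed. */
    static double changedFraction(boolean[][] changed, int[] region) {
        int total = 0, dirty = 0;
        for (int r = region[0]; r <= region[2]; r++) {
            for (int c = region[1]; c <= region[3]; c++) {
                total++;
                if (changed[r][c]) dirty++;
            }
        }
        return total == 0 ? 0 : (double) dirty / total;
    }
}
```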
  • FIG. 11 is a block diagram of one format of an audio stream. The audio stream comprises a series of audio frames 1110 that are transmitted by the host in encoded form to the client. The encoding of an audio frame is described below with reference to FIG. 12. Additionally, the host also compresses the audio data to reduce the required bandwidth for transmission. Each audio frame 1110 has a header 1120 followed by eight blocks 1121-1128 of encoded audio data.
  • The header 1120 of each audio frame 1110 comprises five fields. The first is a host time field 1130. This four-byte field indicates the host clock time corresponding to the audio frame. The host time field 1130 allows the client to, for example, match the audio frame to the corresponding video frame. The second field in the frame header 1120 is a one-byte bit depth field 1132. The bit depth field 1132 is followed by a two-byte frame size field 1134. The frame size field 1134 communicates the length of the audio frame to the client. The last two fields in the frame header 1120 contain decoder variables that correspond to the method used to encode the audio frames. These fields include a two-byte LD field 1136 and a one-byte SD field 1138. The LD and SD fields 1136, 1138 are algorithm specific variables used with the 2-bit and 4-bit ADPCM audio encoders discussed above with reference to FIG. 5A.
  • Each block 1121-1128 in the audio frame 1110 contains a silence map 1140 and up to eight packets 1141-1148 of audio data. The silence map 1140 is a one-byte field. Each of eight silence bits in the silence map field 1140 corresponds to a packet of encoded audio data. The information in the silence bits indicates whether or not the corresponding packet exists in that block 1121-1128 of the audio frame 1110. For example, the silence map field 1140 may contain the following eight silence bits: 01010101, where 1 indicates a silent packet. This silence map field 1140 will be followed by only four packets of encoded audio data corresponding to silence map bits 1, 3, 5 and 7. If the corresponding packet does not exist (e.g., those corresponding to silence map bits 2, 4, 6 and 8 in the above example), the client will insert a silence packet with no audio data in its place. Thus, only packets with non-silent data must be transmitted, thereby reducing the required bandwidth. Each packet that is transmitted after the silence map 1140 consists of 32 samples of audio data.
  • FIG. 12 is a flow chart illustrating the encoding and generation of the audio frame for transmission to the client. The encoding begins at step 1210 with the capture of 2048 audio samples from an audio source such as a microphone, CD player, or other known source. The samples are then divided into packets of 32 samples each, and the packets are grouped into blocks, each block containing eight packets (step 1215). A group of eight blocks then forms a frame. At step 1220, the audio CGI 52 b (see FIG. 1) determines whether the current packet is silent. If the packet is silent, at step 1230, the silence bit in the silence map corresponding to the packet is set to 1. The data in the packet is not encoded, and the process continues to step 1260. If, on the other hand, the packet is not silent, the corresponding silence bit is set to 0 (step 1240), and the data in the packet is encoded (step 1250). The process then continues to step 1260.
  • After each packet is processed, the process determines whether the processed packet was the eighth and last packet of its block of data (step 1260). If the packet was not the last of its block, the process returns to step 1220 and processes the next packet of 32 samples. If the packet was the last of its block, the process writes the silence map and any non-silent packets into the block and proceeds to step 1270.
  • At step 1270, the process determines whether the preceding block was the eighth and last block of the audio frame. If the block was not the last of the frame, the process returns to step 1220 to begin processing the next block by processing the next packet of 32 samples. If the block was the last of the audio frame, the process writes the audio frame by writing the header and the eight blocks. At step 1280, the audio frame is transmitted to the client.
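  • The block-building portion of this loop (steps 1220-1260) is sketched below. The silence test and the packet encoder are placeholders for the ADPCM and μ-law encoders discussed earlier, and the bit ordering of the silence map is an assumption.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;

/** Sketch of building one audio block (steps 1220-1260); encoder, silence test and bit order assumed. */
final class AudioBlockBuilder {
    private static final int PACKET_SAMPLES = 32;
    private static final int PACKETS_PER_BLOCK = 8;

    interface PacketEncoder { byte[] encode(short[] samples); }

    /** samples must hold 8 x 32 = 256 samples for one block. */
    static byte[] buildBlock(short[] samples, PacketEncoder encoder, int silenceThreshold)
            throws IOException {
        ByteArrayOutputStream block = new ByteArrayOutputStream();
        ByteArrayOutputStream packets = new ByteArrayOutputStream();
        int silenceMap = 0;
        for (int p = 0; p < PACKETS_PER_BLOCK; p++) {
            short[] packet = new short[PACKET_SAMPLES];
            System.arraycopy(samples, p * PACKET_SAMPLES, packet, 0, PACKET_SAMPLES);
            if (isSilent(packet, silenceThreshold)) {
                silenceMap |= 1 << p;                       // step 1230: mark packet silent, send no data
            } else {
                packets.write(encoder.encode(packet));      // steps 1240-1250: encode non-silent packet
            }
        }
        block.write(silenceMap);                            // one-byte silence map (1140)
        packets.writeTo(block);                             // only non-silent packets follow
        return block.toByteArray();
    }

    /** Placeholder silence test: peak amplitude below a threshold counts as silence. */
    private static boolean isSilent(short[] packet, int threshold) {
        for (short s : packet) if (Math.abs(s) > threshold) return false;
        return true;
    }
}
```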
  • FIG. 13 is a block diagram illustrating the broadcast of the audio data by the host to clients and the flow of commands and information between components of the host and the client. The audio broadcast begins when the client, via the remote user's web browser 1310 a, sends a request (indicated by line 1391) to the host server system 1320. In one embodiment, the request is an HTTP request. In response to the request, the server system 1320 sends (line 1392) a Jar to the client's web browser 1310. The Jar includes an applet that is launched by the client's web browser. Although FIG. 13 indicates the web browser 1310 as having two blocks 1310 a, 1310 b, it is understood that the two blocks 1310 a, 1310 b only illustrate the same browser before and after the launching of the applet, respectively. Among other functions, the applet then sends a request to the web server 1320 for the web server 1320 to launch a CGI (line 1393). Additionally, the applet causes the client to send client-specific parameters to the web server 1320. In response to the request, the web server 1320 establishes a socket and launches a CGI 1330 according to the parameters supplied by the client and information associated with the socket (line 1394). The CGI 1330 submits periodic requests for audio sample information to an audio encoder 1350 (line 1395). The audio encoder 1350 receives audio samples from an audio capture module 1340 and encodes the samples as described, for example, above with reference to FIG. 12 (line 1396). The encoder 1350 responds to the periodic requests from the CGI 1330 by making the encoded audio information available to the CGI 1330 via, for example, shared memory (line 1395). The audio encoder module 1350 and the audio CGI module 1330 may be sub-modules in the audio CGI 52 b shown in FIG. 1. The CGI 1330 transmits the encoded audio frames to the applet over the established socket (line 1397). The applet decodes the encoded audio frames, providing audio to the user.
  • FIG. 14 is a flow chart of the function of the dynamic domain name system (DNS) updating process performed by the IP PROC module 60 illustrated in FIG. 1. The updating process begins when the host 10 (see FIG. 1) connects to a network 20 such as the Internet. When the host 10 connects to the network 20, it may be assigned a different Internet Protocol (IP) address from that which it was assigned during a previous connection. For example, the host 10 may connect to the Internet 20 through a service provider. The updating process, therefore, first checks to determine whether the current IP address is new (step 1410). If the IP address is unchanged, the process continues to step 1450. On the other hand, if the IP address is new, at step 1420, the process sends a request to a DNS host server 90 to update the IP address. The DNS host server 90 updates the IP address corresponding to the requesting host in its database or in a DNS interface 92 of a service provider affiliated with the host 10 (step 1440). In response to the request, the process receives an update from the DNS host server 90 at step 1430. The process then proceeds to step 1450. The process is repeated at regular intervals, such as every 2 minutes, to keep the IP address in the DNS host server 90 updated. When a client 30 seeks to obtain data from a host 10, the client 30 is directed to the DNS host server 90 which uses the updated information to direct the client 30 to the proper host 10.
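  • A minimal sketch of this polling loop is shown below. The DnsHost interface stands in for the unspecified update protocol between the host and the DNS host server 90, and InetAddress.getLocalHost() is a simplification; a real implementation would determine the externally visible address assigned by the service provider.

```java
/** Sketch of the dynamic DNS polling loop of FIG. 14; the update call is a placeholder. */
final class DynamicDnsUpdater implements Runnable {
    interface DnsHost { void updateAddress(String hostname, String ip); }

    private final DnsHost dnsHost;
    private final String hostname;
    private final long intervalMillis;   // e.g. every 2 minutes, as described above
    private String lastIp = null;

    DynamicDnsUpdater(DnsHost dnsHost, String hostname, long intervalMillis) {
        this.dnsHost = dnsHost; this.hostname = hostname; this.intervalMillis = intervalMillis;
    }

    @Override public void run() {
        try {
            while (true) {
                // Simplification: getLocalHost() may return a LAN address rather than the public one.
                String ip = java.net.InetAddress.getLocalHost().getHostAddress();
                if (!ip.equals(lastIp)) {                       // step 1410: is the address new?
                    dnsHost.updateAddress(hostname, ip);        // steps 1420-1440: request the update
                    lastIp = ip;
                }
                Thread.sleep(intervalMillis);                   // step 1450: wait and repeat
            }
        } catch (Exception e) {
            Thread.currentThread().interrupt();
        }
    }
}
```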
  • In a further embodiment, the host 10 may specify a schedule to the DNS host server 90. The schedule may indicate when the host 10 is connected to the network 20 and is available to clients 30. If the host 10 is not available, the DNS host server 90 can direct a client 30 to a web page providing the schedule and availability of the host 10 or other information. Alternatively, the DNS host server 90 can monitor when the host 10 is not connected to the network 20. When the host 10 is not connected to the network 20, the DNS host server 90 can direct a client 30 to a web page with an appropriate message or information.
  • FIG. 15 is a block diagram of a system for mirroring audio and video data streamed by the host. A mirror computer 1510 is configured with a web server process 1520 to interface with clients 1530. In response to requests from clients 1530 made to the web server process 1520, the mirror computer 1510 launches a CGI process, nph-mirr 1540, for each requesting client 1530. An AdMirror process 1550 running on the mirror computer 1510 coordinates the mirroring of one or more hosts 1560. When a client 1530 makes a request to the web server 1520 for a specific host 1560, the nph-mirr process 1540 corresponding to that client 1530 causes the AdMirror process 1550 to launch a Yowzer process 1570 for the specific host 1560 requested by the client 1530. The Yowzer process 1570 coordinates the connection of the mirror computer 1510 to the host 1560 and the streaming of the video and audio data from the host 1560. If a Yowzer process 1570 already exists for the specific host 1560, as may happen if the specific host 1560 has been previously requested by another client 1530, an additional Yowzer process 1570 is not launched. The AdMirror process 1550 then causes the Yowzer process 1570 corresponding to the requested host 1560 to interface with the nph-mirr process 1540 corresponding to the requesting client 1530. Thus, a single Yowzer process 1570 may support multiple nph-mirr 1540 processes and their corresponding clients 1530.
  • Each nph-mirr process 1540 functions as, for example, the CGI 52 described above with reference to FIG. 1, and coordinates streaming of data from the host 1560 to the client 1530. Accordingly, the nph-mirr process 1540 sends an applet to the client 1530 and receives parameters related to the capabilities of the client 1530 and client's browser. Thus, the client 1530 receives streamed data at, for example, a frame rate that corresponds to its capability to process the frames.
  • Thus, while the host 1560 streams data to the mirror computer 1510, the mirror computer 1510 assumes the responsibility of streaming the data to each of the clients 1530. This frees the host 1560 to use its processing power for maintaining high video and audio stream rates. The mirror computer 1510 may be a dedicated, powerful processor capable of accommodating numerous clients 1530 and numerous hosts 1560.
  • The foregoing description details certain embodiments of the invention. It will be appreciated, however, that no matter how detailed the foregoing appears, the invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiment is to be considered in all respects only as illustrative and not restrictive and the scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Claims (17)

1. A method, implemented in hardware, of distributing media data to a client computer via a network from a server computer, the method comprising:
receiving a data request at a server computer from a client computer via a network;
receiving a client specific parameter from said client computer;
capturing a stream of multimedia data comprising a video signal using at least one device configured to capture said stream of multimedia data;
receiving at the server computer a selection of a region of said video signal to view on said client computer;
encoding said stream of multimedia data in response to said client specific parameter, wherein said encoding is based on said selected region of said video signal; and
streaming said encoded stream to said client computer from said server computer via the network according to said client specific parameter.
2. The method of claim 1, wherein said client computer specific parameter comprises data indicative of the processing capability of said client computer.
3. The method of claim 2, wherein streaming said encoded stream comprises streaming said encoded stream at a rate compatible with the processing capability of said client computer.
4. The method of claim 1, further comprising:
receiving a second client specific parameter from a second client computer;
encoding said stream of multimedia data in response to said second client specific parameter as a second encoded stream; and
streaming said second encoded stream to said second client computer from said server computer via the network according to said second client specific parameter.
5. The method of claim 4, wherein said second encoded stream is streamed to said second client computer while said encoded stream is streamed to said client computer at a different rate.
6. The method of claim 1, wherein said client computer specific parameter is selected from the group consisting of video source selection, audio source selection, audio and video source selection, frame rate, compression level, image resolution, image brightness, image contrast, and image view.
7. The method of claim 1, wherein said media data comprises at least one of audio data and video data.
8. The method of claim 1, further comprising:
launching a delay monitoring module on said client computer;
detecting a change in received stream rate at said client computer;
communicating a request to a server computer requesting an updated client specific parameter comprising a multimedia data stream rate based on the detected change.
9. The method of claim 8, wherein detecting a change in received stream rate comprises repeatedly detecting a change at a time interval.
10. The method of claim 8, wherein detecting a change in received stream rate comprises detecting a change in received stream rate that exceeds a threshold.
11. The method of claim 1, wherein encoding said stream of multimedia data based on the selected region of said video signal comprises panning said video signal.
12. A system for distributing media data, the system comprising:
at least one device configured to capture a stream of multimedia data, said stream comprising a video signal;
a server configured to receive said stream of multimedia data from said at least one device; and
a client computer configured to communicate a request for multimedia data to said server over a network, wherein said request comprises a client specific parameter and a selection of a region of said video signal to view on said client computer, and
wherein said server is further configured to:
encode said stream of multimedia data in response to said client specific parameter and based on said selection; and
stream said encoded stream to said client computer according to said client specific parameter.
13. The system of claim 12, wherein said server is configured to provide said streaming media data at a rate compatible with the processing capability of the client computer.
14. The system of claim 12, further comprising:
a second client computer, wherein said server is further configured to:
receive a second client specific parameter from a second client computer;
encode said stream of multimedia data in response to said second client specific parameter as a second encoded stream; and
stream said second encoded stream to said second client computer from said server computer via the network according to said client specific parameter.
15. The system of claim 14, wherein said server is configured to stream said second encoded stream to said second client computer while streaming said encoded stream to said first client computer at a different rate.
16. The method of claim 1, wherein encoding said stream of multimedia data based on the selected region of said video signal comprises zooming said video signal.
17. A method, implemented in hardware, of distributing media data to a client computer via a network from a server computer, the method comprising:
receiving a data request at a server computer from a client computer via a network;
receiving a client specific parameter from said client computer;
capturing a stream of multimedia data using at least one device configured to capture said stream of multimedia data;
encoding said stream of multimedia data in response to said client specific parameter;
streaming said encoded stream to said client computer from said server computer via the network according to said client specific parameter;
launching a delay monitoring module on said client computer;
detecting a change in received stream rate at said client computer; and
communicating a request to the server computer requesting an updated client specific parameter comprising a multimedia data stream rate based on the detected change.
US12/702,989 2000-08-29 2010-02-09 Method and Apparatus for Distributing Multimedia to Remote Clients Abandoned US20100138489A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/702,989 US20100138489A1 (en) 2000-08-29 2010-02-09 Method and Apparatus for Distributing Multimedia to Remote Clients

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US09/652,113 US7103668B1 (en) 2000-08-29 2000-08-29 Method and apparatus for distributing multimedia to remote clients
US11/515,255 US20070005690A1 (en) 2000-08-29 2006-09-01 Method and apparatus for distributing multimedia to remote clients
US12/702,989 US20100138489A1 (en) 2000-08-29 2010-02-09 Method and Apparatus for Distributing Multimedia to Remote Clients

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/515,255 Continuation US20070005690A1 (en) 2000-08-29 2006-09-01 Method and apparatus for distributing multimedia to remote clients

Publications (1)

Publication Number Publication Date
US20100138489A1 true US20100138489A1 (en) 2010-06-03

Family

ID=36939617

Family Applications (3)

Application Number Title Priority Date Filing Date
US09/652,113 Expired - Fee Related US7103668B1 (en) 2000-08-29 2000-08-29 Method and apparatus for distributing multimedia to remote clients
US11/515,255 Abandoned US20070005690A1 (en) 2000-08-29 2006-09-01 Method and apparatus for distributing multimedia to remote clients
US12/702,989 Abandoned US20100138489A1 (en) 2000-08-29 2010-02-09 Method and Apparatus for Distributing Multimedia to Remote Clients

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US09/652,113 Expired - Fee Related US7103668B1 (en) 2000-08-29 2000-08-29 Method and apparatus for distributing multimedia to remote clients
US11/515,255 Abandoned US20070005690A1 (en) 2000-08-29 2006-09-01 Method and apparatus for distributing multimedia to remote clients

Country Status (1)

Country Link
US (3) US7103668B1 (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080189425A1 (en) * 2003-10-10 2008-08-07 Microsoft Corporation Media Stream Scheduling for Hiccup-Free Fast-Channel-Change in the Presence of Network Chokepoints
US20100201824A1 (en) * 2008-05-07 2010-08-12 Jeff Wei Systems and methods for comparing media signals
US20110299602A1 (en) * 2007-07-30 2011-12-08 Broadcom Corporation Monitor with conversion capability for portable media player
WO2011158179A1 (en) * 2010-06-14 2011-12-22 Ericsson Television Inc. Screen zoom feature for cable system subscribers
CN103222277A (en) * 2010-06-04 2013-07-24 三星电子株式会社 Method and apparatus for adaptive streaming based on plurality of elements for determining quality of content
CN104486436A (en) * 2014-12-22 2015-04-01 齐晓辰 Method and application system for monitoring hunting cameras on the basis of intelligent terminal
US9197689B2 (en) 2010-03-19 2015-11-24 Samsung Electronics Co., Ltd. Method and apparatus for adaptively streaming content including plurality of chapters
US9699486B2 (en) 2010-02-23 2017-07-04 Samsung Electronics Co., Ltd. Method and apparatus for transmitting and receiving data
US9756364B2 (en) 2009-12-07 2017-09-05 Samsung Electronics Co., Ltd. Streaming method and apparatus operating by inserting other content into main content
US9860573B2 (en) 2009-11-13 2018-01-02 Samsung Electronics Co., Ltd. Method and apparatus for providing and receiving data
US9967598B2 (en) 2009-11-13 2018-05-08 Samsung Electronics Co., Ltd. Adaptive streaming method and apparatus
US10425666B2 (en) 2009-11-13 2019-09-24 Samsung Electronics Co., Ltd. Method and apparatus for adaptive streaming using segmentation
USRE48360E1 (en) 2009-11-13 2020-12-15 Samsung Electronics Co., Ltd. Method and apparatus for providing trick play service

Families Citing this family (63)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7103668B1 (en) * 2000-08-29 2006-09-05 Inetcam, Inc. Method and apparatus for distributing multimedia to remote clients
FI20002124A (en) * 2000-09-27 2002-03-28 Nokia Corp Changing channel properties
US6985934B1 (en) 2000-10-23 2006-01-10 Binham Communications Corporation Method and system for providing rich media content over a computer network
CN1243442C (en) * 2000-12-15 2006-02-22 英国电讯有限公司 Transmission and reception of audio and/or video material
US6754642B2 (en) * 2001-05-31 2004-06-22 Contentguard Holdings, Inc. Method and apparatus for dynamically assigning usage rights to digital works
CA2809894C (en) 2001-06-27 2017-12-12 Skky Incorporated Improved media delivery platform
EP1429559B1 (en) * 2001-09-18 2017-04-12 Sony Corporation Transmission apparatus, transmission method, content delivery system, content delivery method, and program
US20070266413A1 (en) * 2001-12-20 2007-11-15 I-Requestv, Inc. Method and system for supplementing television programming with e-mailed magazines
US20070157233A1 (en) * 2001-12-20 2007-07-05 I-Requestv, Inc. Method and system for supplementing television programming with e-mailed magazines
US7373414B2 (en) * 2002-08-29 2008-05-13 Amx Llc Multi-media system and method for simultaneously delivering multi-media data to multiple destinations
US20040059836A1 (en) * 2002-09-23 2004-03-25 Peter Spaepen Method for generating and displaying a digital datafile containing video data
US7555540B2 (en) 2003-06-25 2009-06-30 Microsoft Corporation Media foundation media processor
US9412123B2 (en) 2003-07-01 2016-08-09 The 41St Parameter, Inc. Keystroke analysis
US20050198395A1 (en) * 2003-12-29 2005-09-08 Pradeep Verma Reusable compressed objects
KR100657065B1 (en) * 2004-01-29 2006-12-13 삼성전자주식회사 Device and method for character processing in wireless terminal
US10999298B2 (en) 2004-03-02 2021-05-04 The 41St Parameter, Inc. Method and system for identifying users and detecting fraud by use of the internet
US20050251832A1 (en) * 2004-03-09 2005-11-10 Chiueh Tzi-Cker Video acquisition and distribution over wireless networks
WO2005091135A2 (en) * 2004-03-19 2005-09-29 Koninklijke Philips Electronics N.V. Media signal processing method, corresponding system, and application thereof in a resource-scalable motion estimator
US9161068B2 (en) 2004-03-24 2015-10-13 Onstream Media Corporation Remotely accessed virtual recording room
US7720983B2 (en) * 2004-05-03 2010-05-18 Microsoft Corporation Fast startup for streaming media
US7649937B2 (en) * 2004-06-22 2010-01-19 Auction Management Solutions, Inc. Real-time and bandwidth efficient capture and delivery of live video to multiple destinations
JP2006013827A (en) * 2004-06-25 2006-01-12 Hitachi Communication Technologies Ltd Packet transfer apparatus
US20060036838A1 (en) * 2004-08-10 2006-02-16 Salcedo David M Security system network interface and security system incorporating the same
US7342584B2 (en) * 2004-11-18 2008-03-11 Amx, Llc Method and computer program for implementing interactive bargraphs of any shape or design on a graphical user interface
US9182993B2 (en) * 2005-03-18 2015-11-10 Broadcom Corporation Data and phase locking buffer design in a two-way handshake system
GB0517113D0 (en) 2005-08-20 2005-09-28 Ibm Methods, apparatus and computer programs for data communication efficiency
US11301585B2 (en) 2005-12-16 2022-04-12 The 41St Parameter, Inc. Methods and apparatus for securely displaying digital images
EP1985104A2 (en) * 2006-02-14 2008-10-29 Casio Computer Co., Ltd. Server apparatuses, server control programs, and client apparatuses in a computer system
JP4848801B2 (en) * 2006-03-09 2011-12-28 カシオ計算機株式会社 Screen display control device and screen display control processing program
EP1999719B1 (en) * 2006-03-29 2018-11-21 Casio Computer Co., Ltd. Server apparatus of computer system
US8151327B2 (en) * 2006-03-31 2012-04-03 The 41St Parameter, Inc. Systems and methods for detection of session tampering and fraud prevention
US20080034396A1 (en) * 2006-05-30 2008-02-07 Lev Zvi H System and method for video distribution and billing
JP4957126B2 (en) * 2006-08-31 2012-06-20 カシオ計算機株式会社 Client device and program
TWI323992B (en) * 2006-09-08 2010-04-21 Quanta Comp Inc Method for detecting status of host and/or I/O peripherals in centralized computer system
US20080247400A1 (en) * 2007-04-04 2008-10-09 Optimal Licensing Corporation System and method for increasing the efficiency in the delivery of media within a network
US7920747B2 (en) * 2007-05-09 2011-04-05 International Business Machines Corporation Pre-distribution image scaling for screen size
TW200924534A (en) * 2007-06-04 2009-06-01 Objectvideo Inc Intelligent video network protocol
US8065429B2 (en) * 2007-06-28 2011-11-22 Nokia Corporation System, apparatus and method for associating an anticipated success indication with data delivery
JP4983923B2 (en) * 2007-08-27 2012-07-25 富士通株式会社 Decoder device and decoding method
JP4725587B2 (en) * 2008-03-18 2011-07-13 カシオ計算機株式会社 Server apparatus and server control program
US8090822B2 (en) 2008-04-11 2012-01-03 The Nielsen Company (Us), Llc Methods and apparatus for nonintrusive monitoring of web browser usage
US7844705B2 (en) * 2008-05-30 2010-11-30 General Electric Company Networked image visualization image quality enhancement method and system
CN101365125B (en) * 2008-09-27 2012-03-21 腾讯科技(深圳)有限公司 Multipath video communication method and system
JP4697321B2 (en) * 2009-03-24 2011-06-08 カシオ計算機株式会社 Computer system, client device, and program
US9112850B1 (en) 2009-03-25 2015-08-18 The 41St Parameter, Inc. Systems and methods of sharing information through a tag-based consortium
CN101552913B (en) 2009-05-12 2011-07-06 腾讯科技(深圳)有限公司 Multi-channel video communication system and processing method
US20110208851A1 (en) * 2010-02-23 2011-08-25 Robin Frost System and method for data storage, such as discovery and marking ownership of network storage devices
US8390743B2 (en) 2011-03-31 2013-03-05 Intersil Americas Inc. System and methods for the synchronization and display of video input signals
US9131013B2 (en) 2011-07-29 2015-09-08 International Business Machines Corporation Tailoring content to be delivered to mobile device based upon features of mobile device
CN103037198A (en) * 2011-09-30 2013-04-10 捷达世软件(深圳)有限公司 System and method for monitoring videos
US9077947B2 (en) * 2011-11-09 2015-07-07 Microsoft Technology Licensing, Llc Presenting linear and nonlinear content via DVR
US10754913B2 (en) 2011-11-15 2020-08-25 Tapad, Inc. System and method for analyzing user device information
US9633201B1 (en) 2012-03-01 2017-04-25 The 41St Parameter, Inc. Methods and systems for fraud containment
US9521551B2 (en) 2012-03-22 2016-12-13 The 41St Parameter, Inc. Methods and systems for persistent cross-application mobile device identification
EP2880619A1 (en) 2012-08-02 2015-06-10 The 41st Parameter, Inc. Systems and methods for accessing records via derivative locators
WO2014078569A1 (en) 2012-11-14 2014-05-22 The 41St Parameter, Inc. Systems and methods of global identification
EP2837154B1 (en) * 2013-02-22 2018-11-14 Unify GmbH & Co. KG Method for controlling data streams of a virtual session with multiple participants, collaboration server, computer program, computer program product, and digital storage medium
EP2816761A1 (en) * 2013-06-17 2014-12-24 Thomson Licensing Wifi display compatible network gateway
US10902327B1 (en) 2013-08-30 2021-01-26 The 41St Parameter, Inc. System and method for device identification and uniqueness
US10091312B1 (en) 2014-10-14 2018-10-02 The 41St Parameter, Inc. Data structures for intelligently resolving deterministic and probabilistic device identifiers to device profiles and/or groups
CN108476345B (en) * 2015-11-02 2021-10-15 万特里克斯公司 Method and system for traffic rate adjustment in content control streaming media networks
EP3554238A4 (en) * 2016-12-15 2020-05-27 Ecolab USA Inc. Peroxyformic acid compositions for membrane filtration cleaning in energy services
EP3759980B1 (en) * 2018-02-27 2021-11-03 Nokia Solutions and Networks Oy Video on demand load equalization

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5953506A (en) * 1996-12-17 1999-09-14 Adaptive Media Technologies Method and apparatus that provides a scalable media delivery system
US6076109A (en) * 1996-04-10 2000-06-13 Lextron, Systems, Inc. Simplified-file hyper text protocol
US6091777A (en) * 1997-09-18 2000-07-18 Cubic Video Technologies, Inc. Continuously adaptive digital video compression system and method for a web streamer
US6345279B1 (en) * 1999-04-23 2002-02-05 International Business Machines Corporation Methods and apparatus for adapting multimedia content for client devices
US20020065925A1 (en) * 1999-09-18 2002-05-30 Jeremy A. Kenyon Dynamic scalable multi-media content streaming
US6449653B2 (en) * 1997-03-25 2002-09-10 Microsoft Corporation Interleaved multiple multimedia stream for synchronized transmission over a computer network
US6594699B1 (en) * 1997-10-10 2003-07-15 Kasenna, Inc. System for capability based multimedia streaming over a network
US6859557B1 (en) * 2000-07-07 2005-02-22 Microsoft Corp. System and method for selective decoding and decompression
US20050207495A1 (en) * 2004-03-10 2005-09-22 Jayaram Ramasastry Methods and apparatuses for compressing digital image data with motion prediction
US7103668B1 (en) * 2000-08-29 2006-09-05 Inetcam, Inc. Method and apparatus for distributing multimedia to remote clients
US7103099B1 (en) * 1998-07-27 2006-09-05 Microsoft Corporation Selective compression
US7339993B1 (en) * 1999-10-01 2008-03-04 Vidiator Enterprises Inc. Methods for transforming streaming video data

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6560285B1 (en) * 1998-03-30 2003-05-06 Sarnoff Corporation Region-based information compaction as for digital images

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6076109A (en) * 1996-04-10 2000-06-13 Lextron, Systems, Inc. Simplified-file hyper text protocol
US5953506A (en) * 1996-12-17 1999-09-14 Adaptive Media Technologies Method and apparatus that provides a scalable media delivery system
US6449653B2 (en) * 1997-03-25 2002-09-10 Microsoft Corporation Interleaved multiple multimedia stream for synchronized transmission over a computer network
US6091777A (en) * 1997-09-18 2000-07-18 Cubic Video Technologies, Inc. Continuously adaptive digital video compression system and method for a web streamer
US6594699B1 (en) * 1997-10-10 2003-07-15 Kasenna, Inc. System for capability based multimedia streaming over a network
US7103099B1 (en) * 1998-07-27 2006-09-05 Microsoft Corporation Selective compression
US6345279B1 (en) * 1999-04-23 2002-02-05 International Business Machines Corporation Methods and apparatus for adapting multimedia content for client devices
US6973475B2 (en) * 1999-09-18 2005-12-06 Wildtangent Dynamic scalable multi-media content streaming
US20020065925A1 (en) * 1999-09-18 2002-05-30 Jeremy A. Kenyon Dynamic scalable multi-media content streaming
US7339993B1 (en) * 1999-10-01 2008-03-04 Vidiator Enterprises Inc. Methods for transforming streaming video data
US6859557B1 (en) * 2000-07-07 2005-02-22 Microsoft Corp. System and method for selective decoding and decompression
US7103668B1 (en) * 2000-08-29 2006-09-05 Inetcam, Inc. Method and apparatus for distributing multimedia to remote clients
US20050207495A1 (en) * 2004-03-10 2005-09-22 Jayaram Ramasastry Methods and apparatuses for compressing digital image data with motion prediction

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090013080A1 (en) * 2003-10-10 2009-01-08 Microsoft Corporation Media Stream Scheduling for Hiccup-Free Fast-Channel-Change in the Presence of Network Chokepoints
US9363302B2 (en) * 2003-10-10 2016-06-07 Microsoft Technology Licensing, Llc Media stream scheduling for hiccup-free fast-channel-change in the presence of network chokepoints
US20080189425A1 (en) * 2003-10-10 2008-08-07 Microsoft Corporation Media Stream Scheduling for Hiccup-Free Fast-Channel-Change in the Presence of Network Chokepoints
US8606951B2 (en) 2003-10-10 2013-12-10 Microsoft Corporation Media stream scheduling for hiccup-free fast-channel-change in the presence of network chokepoints
US20110299602A1 (en) * 2007-07-30 2011-12-08 Broadcom Corporation Monitor with conversion capability for portable media player
US9596498B2 (en) * 2007-07-30 2017-03-14 Broadcom Corporation Monitor with conversion capability for portable media player
US8780209B2 (en) * 2008-05-07 2014-07-15 Evertz Microsystems Ltd. Systems and methods for comparing media signals
US20100201824A1 (en) * 2008-05-07 2010-08-12 Jeff Wei Systems and methods for comparing media signals
US9967598B2 (en) 2009-11-13 2018-05-08 Samsung Electronics Co., Ltd. Adaptive streaming method and apparatus
US9860573B2 (en) 2009-11-13 2018-01-02 Samsung Electronics Co., Ltd. Method and apparatus for providing and receiving data
US10425666B2 (en) 2009-11-13 2019-09-24 Samsung Electronics Co., Ltd. Method and apparatus for adaptive streaming using segmentation
USRE48360E1 (en) 2009-11-13 2020-12-15 Samsung Electronics Co., Ltd. Method and apparatus for providing trick play service
US9756364B2 (en) 2009-12-07 2017-09-05 Samsung Electronics Co., Ltd. Streaming method and apparatus operating by inserting other content into main content
US9699486B2 (en) 2010-02-23 2017-07-04 Samsung Electronics Co., Ltd. Method and apparatus for transmitting and receiving data
US9197689B2 (en) 2010-03-19 2015-11-24 Samsung Electronics Co., Ltd. Method and apparatus for adaptively streaming content including plurality of chapters
US9277252B2 (en) 2010-06-04 2016-03-01 Samsung Electronics Co., Ltd. Method and apparatus for adaptive streaming based on plurality of elements for determining quality of content
CN103222277A (en) * 2010-06-04 2013-07-24 三星电子株式会社 Method and apparatus for adaptive streaming based on plurality of elements for determining quality of content
CN102939573A (en) * 2010-06-14 2013-02-20 爱立信电视公司 Screen zoom feature for cable system subscribers
WO2011158179A1 (en) * 2010-06-14 2011-12-22 Ericsson Television Inc. Screen zoom feature for cable system subscribers
CN104486436A (en) * 2014-12-22 2015-04-01 齐晓辰 Method and application system for monitoring hunting cameras on the basis of intelligent terminal

Also Published As

Publication number Publication date
US20070005690A1 (en) 2007-01-04
US7103668B1 (en) 2006-09-05

Similar Documents

Publication Publication Date Title
US7103668B1 (en) Method and apparatus for distributing multimedia to remote clients
US20050091311A1 (en) Method and apparatus for distributing multimedia to remote clients
KR100324974B1 (en) Internet convolution audio/video server
US6412004B1 (en) Metaserver for a multimedia distribution network
CN1843013B (en) Method for managing a streaming media service
KR100703399B1 (en) Transcoding apparatus and method for seamless video contents transmission
EP1438673B1 (en) System and method for communicating media signals
CA2365253C (en) System and method for providing internet broadcasting data based on hierarchical structure
US20040103438A1 (en) Methods and systems for transferring events including multimedia data
US20080201751A1 (en) Wireless Media Transmission Systems and Methods
US20030069963A1 (en) System and method of quality of service signaling between client and server devices
CA2768258C (en) Remote controlled studio camera system
US10516856B2 (en) Network video recorder cluster and method of operation
WO2005029864A1 (en) Method and apparatus for generating graphical and media displays at a thin client
KR20040031779A (en) Video stream switching
CN112035081A (en) Screen projection method and device, computer equipment and storage medium
US7834904B2 (en) Video surveillance system
CN112272326B (en) Distributed cross-domain large screen synchronization system
CN114389959B (en) Network congestion control method, device, electronic equipment and storage medium
KR20050085753A (en) Clipping of media data transmitted in a network
CA2603579A1 (en) Integrated wireless multimedia transmission system
Vogel et al. On QoS Negotiation in Distributed Multimedia Application
US20060098100A1 (en) Monitoring device
JPH11306116A (en) One-site multi-display system
Smith et al. JIWYNET

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION