WO1997017775A1 - Multimedia reception in a digital broadcasting system - Google Patents

Multimedia reception in a digital broadcasting system

Info

Publication number
WO1997017775A1
Authority
WO
WIPO (PCT)
Prior art keywords
multimedia
programme
stream
scene
audio
Prior art date
Application number
PCT/FI1996/000594
Other languages
Finnish (fi)
French (fr)
Inventor
Ari Salomäki
Original Assignee
Oy Nokia Ab
Priority date
Filing date
Publication date
Application filed by Oy Nokia Ab filed Critical Oy Nokia Ab
Priority to AU73011/96A priority Critical patent/AU7301196A/en
Publication of WO1997017775A1 publication Critical patent/WO1997017775A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04HBROADCAST COMMUNICATION
    • H04H20/00Arrangements for broadcast or for distribution combined with broadcast
    • H04H20/28Arrangements for simultaneous broadcast of plural pieces of information
    • H04H20/30Arrangements for simultaneous broadcast of plural pieces of information by a single channel
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04HBROADCAST COMMUNICATION
    • H04H20/00Arrangements for broadcast or for distribution combined with broadcast
    • H04H20/44Arrangements characterised by circuits or components specially adapted for broadcast
    • H04H20/46Arrangements characterised by circuits or components specially adapted for broadcast specially adapted for broadcast systems covered by groups H04H20/53-H04H20/95
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04HBROADCAST COMMUNICATION
    • H04H20/00Arrangements for broadcast or for distribution combined with broadcast
    • H04H20/65Arrangements characterised by transmission systems for broadcast
    • H04H20/71Wireless systems
    • H04H20/72Wireless systems of terrestrial networks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04HBROADCAST COMMUNICATION
    • H04H20/00Arrangements for broadcast or for distribution combined with broadcast
    • H04H20/86Arrangements characterised by the broadcast information itself
    • H04H20/93Arrangements characterised by the broadcast information itself which locates resources of other pieces of information, e.g. URL [Uniform Resource Locator]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/236Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
    • H04N21/23614Multiplexing of additional data and video streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/426Internal components of the client ; Characteristics thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/434Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams, extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
    • H04N21/4348Demultiplexing of additional data and video streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/8106Monomedia components thereof involving special audio data, e.g. different tracks for different languages
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8455Structuring of content, e.g. decomposing content into time segments involving pointers to the content, e.g. pointers to the I-frames of the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8543Content authoring using a description language, e.g. Multimedia and Hypermedia information coding Expert Group [MHEG], eXtensible Markup Language [XML]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/44Receiver circuitry for the reception of television signals according to analogue transmission standards
    • H04N5/60Receiver circuitry for the reception of television signals according to analogue transmission standards for the sound signals
    • H04N5/602Receiver circuitry for the reception of television signals according to analogue transmission standards for the sound signals for digital sound signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/16Analogue secrecy systems; Analogue subscription systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04HBROADCAST COMMUNICATION
    • H04H2201/00Aspects of broadcast communication
    • H04H2201/10Aspects of broadcast communication characterised by the type of broadcast system
    • H04H2201/20Aspects of broadcast communication characterised by the type of broadcast system digital audio broadcasting [DAB]

Abstract

To transfer a multimedia programme in a DAB system, an audio stream is divided into successive segments of variable length, which are marked by means of individual stream marker IDs placed at segment boundaries. The boundaries between segments indicate a change in the multimedia presentation. A scene in the multimedia programme, which, when active, is what the user sees on the display of the receiver, contains links that are waiting to be activated. When a marker associated with one of the links and acting as an excitation is decoded from the audio stream, it activates the link concerned. As a result, either the scene is displayed or the link causes a transition to another scene, which is displayed on the screen. The transition is invisible to the user, so the user perceives the presentation as starting with the right scene.

Description

Multimedia Reception in a Digital Broadcasting System
The present invention relates to the transfer of a multimedia programme and in particular to the mechanisms used to initiate such transfer in a receiver in a digital broadcasting system.
In the Digital Audio Broadcasting (DAB) system, which has been developed to allow an efficient utilization of frequency bands, the transmission path is completely digital. The system is designed to replace the analogue broadcasting system commonly used at present, which is based on the use of frequency modulation. DAB defines a digital radio channel based on multiple carriers which is applicable for the transmission of both audio and data services. In a completely digital transmission channel, it is possible to transmit a continuous data or audio stream, or the channel may be a packet channel. Packet transmission is more flexible and permits easier transmission of data units of a limited length. The DAB system is defined in ETSI (European Telecommunication Standards Institute) standard 300 401, February 1995.
From the user's point of view, the highest level of abstraction in the DAB system is called an ensemble, Fig. 1. It contains all the services that are available to the user in a given frequency band. A change from one ensemble to another in the receiver is effected by tuning into a different frequency band, just as one changes channels in current FM radio reception. The ensemble is divided into services, exemplified in Fig. 1 by Alpha Radio 1, Beta Radio and Alpha Radio 2. In addition, there may be data services, although these are not shown in the figure. Each service consists of one or more service components, and each of these is placed in a subchannel, which may be either an audio channel or a data channel. For comparison, let it be stated that FM radio contains only one service and one service component (audio) in each channel. At the lowest level, the transmission frame, whose duration is either 24 ms or 96 ms depending on the DAB mode, consists of three chronologically consecutive parts. The first part is a Synchronizing Channel, which contains no service information. The next part is a Fast Information Channel FIC, which has a mode-specific fixed length. The last part is a Main Service Channel MSC, which contains all the subchannels. The position, size and number of subchannels within the MSC may vary, but the size of the MSC is constant. The MSC contains a maximum of 63 different audio and/or data subchannels. The subchannels are numbered on the basis of a so-called Channel Id from 0 to 62. Moreover, the MSC may contain an Auxiliary Information Channel AIC, which has a fixed channel number 63. The AIC may carry the same type of information as the FIC.
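The hierarchy just described (ensemble, services, service components, subchannels) and the three-part transmission frame can be pictured with a small data model. The following Python sketch is purely illustrative: the class and field names are assumptions made for this example and do not appear in the DAB specification.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Subchannel:
    channel_id: int          # 0..62; the AIC uses the fixed number 63
    kind: str                # "audio" or "data"

@dataclass
class ServiceComponent:
    name: str
    subchannel: Subchannel   # each component is carried in one subchannel

@dataclass
class Service:
    label: str               # e.g. "Alpha Radio 1"
    components: List[ServiceComponent] = field(default_factory=list)

@dataclass
class Ensemble:
    label: str               # everything available in one frequency band
    services: List[Service] = field(default_factory=list)

@dataclass
class TransmissionFrame:
    sync_channel: bytes      # Synchronizing Channel, no service information
    fic: bytes               # Fast Information Channel, mode-specific fixed length
    msc: bytes               # Main Service Channel, carries all the subchannels
```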
At the transmitting end, in addition to audio services, the service supplier may also offer e.g. multimedia services, hypermedia services, file-based services and hypertext. From the audio information and data provided by the service suppliers, the DAB operator generates a DAB transmission signal, which comprises successive transmission frames as shown in the lower part of Fig. 1.
In the receiver, the information channel FIC and the MSC, which contains the audio and data services, are separated from each other from the transmission frame. The subchannels are separated, channel-decoded and then passed on for further processing. From the FIC channel received, the customer obtains information about the services contained in the ensemble received and can thus select the service or services he/she wants. By combining subchannel service components in accordance with the application software, it is possible to compose e.g. a desired multimedia service.
As stated above, information can be transferred in packet mode, in which case data capacity can be reserved for service suppliers dynamically, or as a continuous stream. The maximum capacity in packet transfer is 1.728 Mbit/s. In continuous audio transfer, successive audio frames are transferred. Briefly speaking, to transfer information in data packets, the information data is first placed in the data field of a so-called data group. The data group contains header fields and after these a data field, in which the data to be transferred is placed. The length of the data field may vary and is at most 8191 bytes. The last field is the checksum of the data group. The data packets to be sent out to the transmission path are formed from the data group by simply chopping it into sections of equal length and placing each section in the data field of a data packet. Fill bits are used if the last data group section to be placed is shorter than the length of the data field of the data packet. The data packet length has one of four possible values: 24, 48, 72 or 96 bytes. Based on the packet headers, the data group can be assembled again in the receiver. Generally, a data group consists of the data fields of a number of packets transmitted in succession, but in the simplest case a single packet is sufficient to form a data group.
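As a rough illustration of the chopping step described above, the following Python sketch splits a data group into equal-length sections and pads the last one. It deliberately ignores the packet header, address and CRC fields that a real DAB packet also carries; the function name and the zero padding byte are assumptions made for this example.

```python
def chop_data_group(data_group: bytes, section_len: int = 96) -> list[bytes]:
    """Split a data group into equal-length sections, padding the last one."""
    if section_len not in (24, 48, 72, 96):
        raise ValueError("DAB packet lengths are 24, 48, 72 or 96 bytes")
    sections = []
    for offset in range(0, len(data_group), section_len):
        piece = data_group[offset:offset + section_len]
        # "Fill bits": pad the final section up to the fixed section length.
        sections.append(piece.ljust(section_len, b"\x00"))
    return sections

# Example: a maximum-length 8191-byte data group becomes 86 sections of 96 bytes.
payloads = chop_data_group(bytes(8191))
assert len(payloads) == 86 and all(len(p) == 96 for p in payloads)
```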
A continuous audio stream or data stream is transferred in frames, the structure of which is illustrated by Fig. 2. At the transmitting end, audio samples coming in at a frequency of 48 kHz and encoded into 16-bit format are divided into sub-bands, and the samples of the sub-bands are coded into the audio frame by making use of the masking effect of the human ear, so that the incoming bit rate of 768 kbit/s is reduced, e.g. in the case of a mono channel, to a rate of about 100 kbit/s. The four-byte header of the frame contains information intended for the decoders in the receiver, such as synchronisation data, bit rate data and sampling frequency data. A bit allocation field coming after the checksum indicates how the bits are allocated to each one of the audio field sub-bands containing 36 encoded samples and which bits have been removed from the samples in making use of the masking effect. A scale factor selection information field indicates how the group of audio samples has been scaled (normalized) in the decoder. After this there is a field that contains the audio bits proper. The information in it corresponds to 24 ms of audio. The field contains 36 encoded sub-band audio samples divided into twelve triplets, each of which contains 3 sub-band samples. Thus, four triplets correspond to 12 ms of audio. After this there are fill bits if the number of audio bits amounts to less than the audio field length. Finally there are an X-PAD and an F-PAD field, the meanings of which are described next.
Fig. 3 presents the last part of the audio frame. Each audio frame contains bytes that transmit data relating to the programme (Programme Associated Data, PAD). This data is in synchronism with the audio data in the frame. The PAD bytes of successive frames make up a so-called PAD channel. The field consisting of the two last bytes, called the fixed PAD field, is intended for the transmission of real-time information related to the audio, but it can also be used as a very slow data channel. The PAD channel can be extended by employing a so-called extended PAD field X-PAD, which is intended for the transmission of additional information to the listener, such as text associated with the audio, e.g. the lyrics of songs. The X-PAD field may be absent altogether; its length in each frame can be four bytes, a so-called short X-PAD, in which case it is located in the frame area which is better protected against errors, indicated by the shading in the figure, or its length may change from frame to frame, in which case only a part (4 bytes) of it is in the well-protected area. Between the PAD fields there is a Scale Factor Error Check - Cyclic Redundancy Check field ScF-CRC associated with the audio field. The frame always has a fixed-length F-PAD field, and if an X-PAD field exists, its length is encoded in the F-PAD. The X-PAD field has at its beginning one or more contents indicators CI. The CI is a number which indicates the nature or application type of the data placed in the X-PAD data field or in its sub-fields. According to the specification, the maximum number of application types available is 287. Numbers 0, 2-11, 32 and 33 are defined under item 7.4.3 of the specification.
The DAB system allows the transmission of multimedia-type services, but the multimedia techniques currently used are not adequate for this purpose. At present, each multimedia producer uses its own technical solutions for digital audio and video, presentation script language, coding, protocols, operating systems, etc. So far there is no standardized method for generating a complex, interactive multimedia presentation using the producer's computer, storing it in a data medium, transmitting it over a transfer network and reproducing the presentation on another computer. The producer has to store e.g. a multimedia book on a compact disc in numerous different formats, such as CD-I (Compact Disc Interactive), MPEG-1 and QuickTime (a system used by Apple). A recording in the appropriate format is then transferred to an industry-standard PC, a Macintosh or a Unix computer, for these to be able to present the multimedia book. Transfer over a network or data exchange between heterogeneous systems is not possible. The main problem is a lack of international standards for the creation and presentation of the contents of multimedia. In particular, the final multimedia script lacks conditional links and spatial as well as temporal relationships between content elements. For example, the JPEG and MPEG standards only describe the contents of information objects, but they cannot be used to describe the relationships between the objects in a multimedia presentation. To solve this problem, in other words, to define and standardize the structural information of a multimedia presentation, the ISO (International Organization for Standardization) has established a working group called MHEG (Multimedia and Hypermedia Information Coding Experts Group), which has made a proposal for a multimedia standard, known by the same designation.
In its philosophy, the standard follows the layered structure of the OSI model, in which the abstract syntax and the transfer syntax are separated from each other. The standard is based on an object-oriented approach. It has been developed in five parts, of which the first part, called ASN.1 (Abstract Syntax Notation 1), is a complete definition of objects, whereas the fifth part, MHEG-5, describes the implementation at application level, with special focus on TV applications. ASN.1 is also used in MHEG-5. MHEG-5 is defined in the proposed standard ISO/IEC CD 13522-5, September 20, 1995. An MHEG-5 application is composed of scenes and objects common to different scenes. A scene is used to present information (text, audio, video, and so on), whose behaviour is based on the triggering of events. For example, pressing a button visible on the screen starts a video sequence or activates the sound. At least one scene is active at any given instant. Navigating within a presentation thus means moving from one scene to another.
To make the present invention easier to understand, certain MHEG concepts are now briefly described. Links are objects which contain a trigger for triggering an event and a reference to an action object, which again contains a list of elementary events. Thus, a link is associated with a given event. When a certain condition is encountered, the event is triggered and the elementary events (e.g. the starting, running and closing of a video sequence) are executed in the order prescribed by the list of elementary events. A container created for the transfer contains a combination of MHEG objects, so it can be thought of as a complex object consisting of simple basic objects. The container may contain e.g. JPEG, MPEG and text files. Containers can be linked to each other. For the receiver to be able to present the received multimedia programme correctly, it must be provided with a certain software package, called the MHEG engine. It is a process or a number of processes that are able to interpret the encoded MHEG objects in accordance with the specification.
Running an MHEG application on demand in a multimedia receiver connected to a fixed network is basically quite simple. The receiver first identifies the starting object in the received data, downloads it and prepares it. The starting object may be any one of the objects in the container. Usually this is the first scene. After the starting object has been prepared, one or more linked objects are triggered, and this may result in the loading of several objects referenced by the elementary event. This is also the way an audio stream is started. The receiver naturally starts the reception of an audio stream right from the beginning.
The proposed MHEG multimedia is excellently suited for use in the DAB system. In this case, the principle could be as illustrated by Fig. 4. At the transmitting end, the service supplier encodes his multimedia service components, which may have different internal formats, to convert them into objects consistent with the MHEG specification. The objects can be placed in containers. The DAB operator places the containers or objects in a DAB multiplex and transmission frames to be transferred via a packet channel and/or as continuous audio and data. The receiver decodes the sub-channels from the multiplex and passes the objects decoded according to the MHEG specification to an MHEG engine, which decodes the multimedia presentation from them.
In the DAB system, however, the situation is more problematic as compared with a fixed network. A multimedia presentation is very likely to start with audio. Shortly after the start of the audio stream comes a starting image, which may be a still picture. However, as DAB is a broadcasting system, the receiver is frequently switched to a multimedia service in the middle of a programme and therefore in the middle of an audio stream. The first part of the presentation is therefore not present in the memory of the receiver, so the starting image is missed and there is no mechanism for starting it. The only alternative is to wait for a retransmission so that reception can be started from the very beginning. However, there should be a starting mechanism that would allow multimedia reception even after transmission has already begun. This invention presents a solution to the problem described above. The solution is characterized by what is said in the independent claims.
According to the invention, the audio stream is divided into successive segments of different lengths and the segments are marked. For the marking, a specific marker is provided at the boundary between segments. Segment boundaries indicate a change in the multimedia presentation. The change may be e.g. the disappearance of a still picture. A portion of a multimedia presentation that contains still pictures, video and text contains links. Besides the starting scene, such links are also present in other scenes. A marker in the audio stream activates a link in a given scene, whereupon the presentation continues as programmed by the producer. When the receiver is switched on, the decoder decodes the markers found in the audio stream and sends them to the MHEG engine. At the same time, the MHEG engine has received and decoded objects belonging to the presentation and generated a scene which is not displayed. The links in the scene are waiting to be activated, and when a marker associated with one of the links and acting as an excitation is decoded from the audio stream, it activates the link concerned. As a result, either the scene is displayed or the link causes a transition to another scene, which is displayed. The transition is invisible to the user, so the user perceives the presentation as starting with the right scene.
In the following, the invention is described in greater detail by referring to the attached drawings, in which:
Fig. 1 presents the levels of abstraction in the DAB system;
Fig. 2 presents an audio frame;
Fig. 3 presents the PAD fields of the audio frame;
Fig. 4 illustrates MHEG transmission;
Fig. 5 presents the F-PAD field;
Fig. 6a indicates how a marker is encoded in a short X-PAD field;
Fig. 6b indicates how a marker is encoded in a variable-length X-PAD field.
As is known, in a multimedia programme there must be some way to indicate the file that the receiver has to load first and from which the multimedia presentation is to be started. In the present application, the starting file is referred to as the start-up file. In conjunction with the DAB system, this start-up file is preferably notified to the receiver via the data transfer protocol for multimedia files by the method described in patent application FI 954752 by the applicant. Other methods can be used as long as the receiver is enabled to find and load the start-up file. The start-up file contains a link which automatically starts the reception of the audio stream as well.
First, according to the invention, the start-up file contains a number of links associated with events that are waiting to be triggered. When a trigger appears, the link activates certain events as determined by the producer, so the result of these events has been accurately defined.
Second, according to the invention, specific stream marker IDs are included in the audio stream. The length of a stream marker ID is two bytes or preferably three bytes. The stream marker IDs divide the audio stream into segments of varying lengths. The service supplier places the stream marker IDs in the PAD fields of the audio frames in such a way that the segment boundaries are certain clearly distinguishable changes in the multimedia presentation, e.g. the audio stream portion between any given stream marker IDs refers to a given scene and within this scene to a given still picture. In other words, as long as the still picture is visible, the sound decoded in the audio frames between the markers is to be heard via the speakers of the receiver.
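On the supplier side, the placement rule above can be pictured as tagging each 24 ms audio frame with the marker ID of the segment it falls in. The Python sketch below assumes exactly one frame per 24 ms and uses purely illustrative marker values and function names; a real encoder would write these IDs into the X-PAD fields as described further below.

```python
def assign_frame_markers(segments, frame_ms=24):
    """segments: list of (marker_id, duration_ms) in presentation order.

    Returns one 3-byte marker ID per audio frame, so the marker changes
    exactly at the segment boundaries chosen by the service supplier.
    """
    per_frame = []
    for marker_id, duration_ms in segments:
        n_frames = max(1, duration_ms // frame_ms)
        per_frame.extend([marker_id] * n_frames)
    return per_frame

# Example: a still picture shown for 2.4 s, then the next scene for 1.2 s.
frames = assign_frame_markers([(b"\x00\x00\x01", 2400), (b"\x00\x00\x02", 1200)])
assert len(frames) == 150 and frames[0] != frames[-1]
```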
Now, when the user switches the receiver to multimedia reception in the middle of a multimedia transmission, the MHEG engine of the receiver first finds the start-up file among the incoming files and decodes it as described in the above-mentioned patent application. It contains the mechanisms and references to the required files that are needed for the preparation of the first scene, which is then prepared by the MHEG engine. The scene is associated with so-called presentables, which are objects that the user can see or hear. However, these presentables are not activated as yet and the scene is therefore not displayed on the screen of the receiver. The start-up file also contains a command to start the reception of the audio stream associated with the multimedia, but in this case the reception begins in the middle of the audio stream. The first scene contains links that are triggered by a certain marker embedded in the audio stream. The receiver decodes the X-PAD fields of the audio frames, distinguishes the stream marker ID placed in the field and passes it to the MHEG engine. The MHEG engine directs the marker to the links, with the result that the marker triggers the events defined in at least one of the links. The events have been set in the MHEG language by the service supplier. They include loading the objects determined by the service supplier into the receiver and preparing them. The result is e.g. a new scene that the service supplier has meant to be displayed at this point of the audio stream. Its presentables are activated and the scene is displayed on the screen of the receiver. All the preceding actions are part of a chain of internal events in the programme that are not visible to the user. To the user's perception, the multimedia starts at the right point in relation to the audio. After this, the normal interactive procedure is followed. At this stage, objects that are no longer needed because the presentation has jumped from the starting scene directly to a later scene are cleared and removed.
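The receiver-side chain of events just described — prepare the first scene but keep it hidden, then let a decoded stream marker ID trigger a link that activates the correct scene — can be sketched as a small dispatcher. The class and method names below are illustrative assumptions and are not taken from the MHEG-5 specification.

```python
class Scene:
    def __init__(self, name, links):
        self.name = name
        self.links = links      # mapping: stream marker ID -> target scene name
        self.displayed = False

class MhegEngine:
    def __init__(self, scenes, start_scene):
        self.scenes = scenes
        self.current = scenes[start_scene]   # prepared, but not yet displayed

    def on_stream_marker(self, marker_id: bytes):
        """Called by the audio decoder for each stream marker ID found in the X-PAD."""
        target = self.current.links.get(marker_id)
        if target is None:
            return                     # this marker does not trigger any link here
        self.current = self.scenes[target]
        self.current.displayed = True  # presentables activated, scene shown

# Example: the start-up scene waits for markers; reception begins mid-stream.
scenes = {
    "start": Scene("start", {b"\x00\x00\x01": "scene1", b"\x00\x00\x02": "scene2"}),
    "scene1": Scene("scene1", {b"\x00\x00\x02": "scene2"}),
    "scene2": Scene("scene2", {}),
}
engine = MhegEngine(scenes, "start")
engine.on_stream_marker(b"\x00\x00\x02")   # first marker decoded mid-stream
assert engine.current.name == "scene2" and engine.current.displayed
```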
As stated above, the stream marker ID is transferred in the X-PAD field of the audio frame. A stream marker ID is placed at each boundary between audio segments; in other words, the stream marker ID changes at each segment boundary. The stream marker ID refers to the audio information in the frame concerned as well as the audio information in subsequent frames until the stream marker ID changes. However, it is advantageous to place the same stream marker ID at regular intervals in other frames within the segment as well, because this allows faster start-up of the MHEG presentation. It is not necessary to provide every frame with a stream marker ID. In this case the procedure could be such that the receiver decodes an odd number of successive stream markers and the MHEG engine carries out a majority vote, the resulting stream marker ID being then used to activate the link. This provides an advantage when the receiver is in the vicinity of a segment boundary, because it prevents "premature" progress in the multimedia presentation.
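A majority vote over an odd number of successively decoded marker IDs, as suggested above, could look like the following sketch; the window size of five frames and the function name are assumptions made for this example.

```python
from collections import Counter

def voted_marker(recent_markers: list[bytes], window: int = 5) -> bytes | None:
    """Return the majority stream marker ID over the last `window` decoded frames.

    An odd window keeps the vote unambiguous near segment boundaries and
    suppresses isolated corrupted markers, at the cost of a short delay.
    """
    if len(recent_markers) < window:
        return None                      # not enough frames decoded yet
    counts = Counter(recent_markers[-window:])
    marker, votes = counts.most_common(1)[0]
    return marker if votes > window // 2 else None
```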
The stream marker ID can be encoded in the X-PAD field in the manner shown in Figs. 5, 6a and 6b. Fig. 5 presents the F-PAD part defined by the specification that comes at the end of the audio frame. This part begins with a 2-bit field, F-PAD type. If it has the value "00", this means according to the specification that the first two bits in the following 6-bit data field are reserved for an X-PAD indicator. If the indicator bits are "01", this means that an X-PAD field is included and that it is a so-called short X-PAD comprising 4 bytes. If the bits are "10", this means that an X-PAD field is included and that the field is of variable length, called variable X-PAD. From the above information, the decoder detects the presence of an X-PAD field and also learns its type. It then examines the contents indicator CI of the X-PAD.
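A decoder following the rules of Fig. 5 can classify the X-PAD from two bit fields of the F-PAD. The sketch below is a simplification: it only examines the two fields discussed above, and the exact bit positions within the F-PAD byte are an assumption made for illustration rather than the layout of the specification.

```python
def xpad_type(fpad_first_byte: int) -> str:
    """Classify the X-PAD from one F-PAD byte.

    Assumed layout for this sketch: bits 7-6 = F-PAD type, bits 5-4 = X-PAD
    indicator (only meaningful when the F-PAD type is "00").
    """
    fpad_type = (fpad_first_byte >> 6) & 0b11
    if fpad_type != 0b00:
        return "unknown"
    indicator = (fpad_first_byte >> 4) & 0b11
    return {0b00: "none", 0b01: "short", 0b10: "variable"}.get(indicator, "reserved")

assert xpad_type(0b00_01_0000) == "short"
assert xpad_type(0b00_10_0000) == "variable"
```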
In the case of a short X-PAD, Fig. 6a, there is first an 8-bit field which is reserved for an application type indicator. According to the invention, this field is filled with the decimal number 1 (binary number 00000001) to indicate that the three data fields of the X-PAD contain a stream marker ID as used in the invention. The bit pattern of the stream marker ID can be selected by the service supplier; 24 bits provide a sufficient scope of variation. In the case of a short X-PAD, contents indicator value 00000001 thus means that the next three bytes contain a stream marker ID.
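For the short X-PAD case, a minimal encode/decode pair might look as follows. The application type value 1 is the one proposed by the invention; the function names and error handling are assumptions of this sketch.

```python
APP_TYPE_STREAM_MARKER = 0x01          # decimal 1, as proposed in the invention

def build_short_xpad(marker_id: bytes) -> bytes:
    """Short X-PAD: 1-byte contents indicator + 3-byte stream marker ID."""
    if len(marker_id) != 3:
        raise ValueError("the stream marker ID is three bytes in this scheme")
    return bytes([APP_TYPE_STREAM_MARKER]) + marker_id

def parse_short_xpad(xpad: bytes) -> bytes | None:
    """Return the stream marker ID, or None if the X-PAD carries something else."""
    if len(xpad) == 4 and xpad[0] == APP_TYPE_STREAM_MARKER:
        return xpad[1:4]
    return None

assert parse_short_xpad(build_short_xpad(b"\x12\x34\x56")) == b"\x12\x34\x56"
```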
In the case of a variable X-PAD, the contents indicator is as illustrated by Fig. 6b. Its length is two bytes and the Length field indicates the number of bytes included in the X-PAD. In particular, if the length indicator has the value "000", this means that the X-PAD comprises four bytes. The maximum is 48 bytes. According to the specification, the next 5-bit field is reserved for the application type. According to the invention, this field is given the value of decimal 1 (binary number 00001). The "application type external" field is not in use. In the case of a variable X-PAD, contents indicator value 000 00001 thus means that the X-PAD field is four bytes long and contains a stream marker ID and that the stream marker ID is given in the first three bytes. The last byte is a checksum CRC with the polynomial x⁸ + x⁴ + x³ + x² + 1 over the stream marker ID.
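The four-byte payload of this variable X-PAD (3-byte marker plus checksum) can be verified with a plain bitwise CRC-8 over the marker ID, using the generator polynomial given above (x⁸ + x⁴ + x³ + x² + 1, i.e. 0x1D). The initial register value and any final inversion are not stated in the text, so the sketch assumes zero for both; the specification should be consulted for the exact convention.

```python
def crc8_stream_marker(marker_id: bytes) -> int:
    """Bitwise CRC-8 over the 3-byte stream marker ID (generator 0x1D assumed)."""
    crc = 0x00
    for byte in marker_id:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:
                crc = ((crc << 1) ^ 0x1D) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc

# Example: append the checksum to a producer-chosen (illustrative) marker ID.
marker = bytes([0x12, 0x34, 0x56])
x_pad_payload = marker + bytes([crc8_stream_marker(marker)])
```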
In the DAB specification, a meaning has already been defined for application type numbers 0, 2-11, 32 and 33, so the number of the application type referring to the stream marker ID must have a value other than those indicated above. The number 1 is still available and the applicant proposes that it be used for the purpose described in the present invention.
It is obvious to a person skilled in the art that technological development allows many different ways of implementing the basic idea of the invention. The invention and its embodiments are therefore not limited to the examples described above but may be varied within the framework of the claims.

Claims

1. Multimedia programme which has been produced in a specific programming language that tells how monomedia programmes are spatially and temporally linked to each other and in which programme a monomedia stream contains stream marker IDs, the reception of each of which is an event that triggers functions defined in a given link, and in which multimedia programme a number of objects form a scene intended to be displayed on a display device, characterized in that the monomedia stream is an audio stream and the stream marker IDs placed in it divide the audio stream into segments of variable length, each boundary between segments indicating a change in the multimedia presentation, and the objects forming a scene in the multimedia presentation have been provided with links whose triggering event is the reception of a stream marker ID associated with the link, so that the triggering results in a transition from one scene to another in the multimedia presentation.
2. Multimedia programme as defined in claim 1, characterized in that the programming language is MHEG (Multimedia and Hypermedia Information Coding Experts Group).
3. Transfer of a multimedia programme in a DAB broadcasting system, in which the audio stream of the programme is transmitted in audio frames which have at their end a first field F-PAD of fixed length, which is intended for the transfer of data associated with the programme and contains data indicating whether a second field X-PAD for the transfer of data associated with the programme is present at the end of the audio frame, said second field containing a contents indicator CI which indicates the nature of the data in the data field, the rest of the multimedia components are transmitted as files, and the receiver assembles from the received programme a scene to be displayed on a display device, characterized in that the audio stream is divided into segments of variable length by providing those audio frames in the audio stream which involve a change in the multimedia presentation with an individual stream marker ID, which is decoded by the receiver and transferred to the software processing the multimedia presentation, and the objects forming a scene in the multimedia presentation contain links, the triggering event of at least one of which is the transfer of said individual stream marker ID to the software, such triggering causing the software to perform certain specified actions.
4. Transfer of a multimedia programme as defined in claim 3, characterized in that the specified actions cause the scene to be changed into a scene corresponding to the current audio stream.
5. Transfer of a multimedia programme as defined in claim 4, characterized in that, when the reception of the multimedia programme is started in the middle of the transmission, the first scene displayed on the display device is the scene corresponding to the current audio stream.
6. Transfer of a multimedia programme as defined in claim 3, characterized in that audio frames within the segments also contain stream marker IDs, and that the stream marker ID at the beginning of the segment and those elsewhere in the segment refer to the same link.
7. Transfer of a multimedia programme as defined in claim 6, characterized in that, when the same segment contains several stream marker IDs, after removal of the CRC a majority vote is carried out and its result acts as a trigger that triggers the link.
8. Transfer of a multimedia programme as defined in claim 3, characterized in that the stream marker ID is placed in the second field X-PAD intended for the transfer of data associated with the programme.
9. Transfer of a multimedia programme as defined in claim 8, characterized in that the application type field included in the contents indicator CI contains an individual value indicating that the data field contains a stream marker ID and that the stream marker ID has been placed in the data field.
10. Transfer of a multimedia programme as defined in claim 3 or 9, characterized in that the stream marker ID is a 3-byte number.
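The receiver-side behaviour set out in claims 3, 4 and 7 can be illustrated with a short, self-contained sketch: the receiver collects the stream marker X-PAD fields of one audio segment, discards copies whose CRC fails, takes a majority vote over the remainder, and passes the winning ID to the presentation software, which fires the link attached to that ID and switches the displayed scene. The Scene and Presentation structures, the function names and the example values below are assumptions made for illustration only, not the MHEG data model or an actual receiver implementation.

```python
from collections import Counter
from dataclasses import dataclass, field
from typing import Dict, List, Optional

CRC8_POLY = 0x1D  # x^8 + x^4 + x^3 + x^2 + 1, as given in the description


def crc8(data: bytes) -> int:
    """CRC-8 over `data`; initial value and final XOR conventions are assumptions."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ CRC8_POLY) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc


def segment_marker(xpad_fields: List[bytes]) -> Optional[int]:
    """Majority vote (claim 7) over the CRC-valid stream marker IDs of one segment."""
    valid = []
    for data in xpad_fields:
        if len(data) != 4:
            continue                          # not a 4-byte stream marker field
        marker, crc = data[:3], data[3]
        if crc8(marker) == crc:
            valid.append(int.from_bytes(marker, "big"))
    return Counter(valid).most_common(1)[0][0] if valid else None


@dataclass
class Scene:
    name: str
    links: Dict[int, str] = field(default_factory=dict)  # marker ID -> next scene name


class Presentation:
    """Holds the current scene and reacts to stream marker IDs (claims 1, 3 and 4)."""

    def __init__(self, scenes: Dict[str, Scene], start: str):
        self.scenes = scenes
        self.current = scenes[start]

    def on_stream_marker(self, marker_id: int) -> None:
        target = self.current.links.get(marker_id)
        if target is not None:
            self.current = self.scenes[target]   # scene change follows the audio


# Example: two intact copies of marker 5 and one corrupted copy arrive within a
# segment; the vote picks 5 and the presentation moves from "intro" to "news".
if __name__ == "__main__":
    good = (5).to_bytes(3, "big") + bytes([crc8((5).to_bytes(3, "big"))])
    bad = bytes([0x00, 0x00, 0x05, 0xFF])            # wrong CRC, will be discarded
    presentation = Presentation(
        {"intro": Scene("intro", links={5: "news"}), "news": Scene("news")},
        start="intro",
    )
    marker = segment_marker([good, good, bad])
    if marker is not None:
        presentation.on_stream_marker(marker)
    print(presentation.current.name)                 # -> news
```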
PCT/FI1996/000594 1995-11-07 1996-11-05 Multimedia reception in a digital broadcasting system WO1997017775A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU73011/96A AU7301196A (en) 1995-11-07 1996-11-05 Multimedia reception in a digital broadcasting system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FI955356 1995-11-07
FI955356A FI99063C (en) 1995-11-07 1995-11-07 Multimedia reception in a digital broadcast radio system

Publications (1)

Publication Number Publication Date
WO1997017775A1 true WO1997017775A1 (en) 1997-05-15

Family

ID=8544342

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/FI1996/000594 WO1997017775A1 (en) 1995-11-07 1996-11-05 Multimedia reception in a digital broadcasting system

Country Status (3)

Country Link
AU (1) AU7301196A (en)
FI (1) FI99063C (en)
WO (1) WO1997017775A1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2728089A1 (en) * 1994-12-13 1996-06-14 Korea Electronics Telecomm Multimedia source synchronisation for MHEG motor
EP0731575A2 (en) * 1995-03-09 1996-09-11 NOKIA TECHNOLOGY GmbH A method to generate and to transfer a hyper-text document and a hyper-media service to a mobile digital audio receiver

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0854649A3 (en) * 1997-01-16 2000-02-23 Digital Vision Laboratories Television broadcasting system and receiver
EP0854649A2 (en) * 1997-01-16 1998-07-22 Digital Vision Laboratories Television broadcasting system and receiver
WO1999022563A2 (en) * 1997-10-30 1999-05-14 Koninklijke Philips Electronics N.V. Method for coding a presentation
WO1999022563A3 (en) * 1997-10-30 1999-08-12 Koninkl Philips Electronics Nv Method for coding a presentation
EP1032146A3 (en) * 1999-02-24 2004-12-15 Sony Computer Entertainment Inc. Broadcast system and terminal for receiving and reproducing broadcast signals, comprising a download unit
EP1032146A2 (en) * 1999-02-24 2000-08-30 Sony Computer Entertainment Inc. Broadcast system and terminal for receiving and reproducing broadcast signals, comprising a download unit
US8327011B2 (en) 2000-09-12 2012-12-04 WAG Acquistion, LLC Streaming media buffering system
US9762636B2 (en) 2000-09-12 2017-09-12 Wag Acquisition, L.L.C. Streaming media delivery system
US8364839B2 (en) 2000-09-12 2013-01-29 Wag Acquisition, Llc Streaming media delivery system
US8595372B2 (en) 2000-09-12 2013-11-26 Wag Acquisition, Llc Streaming media buffering system
US10567453B2 (en) 2000-09-12 2020-02-18 Wag Acquisition, L.L.C. Streaming media delivery system
US10298638B2 (en) 2000-09-12 2019-05-21 Wag Acquisition, L.L.C. Streaming media delivery system
US10298639B2 (en) 2000-09-12 2019-05-21 Wag Acquisition, L.L.C. Streaming media delivery system
US9729594B2 (en) 2000-09-12 2017-08-08 Wag Acquisition, L.L.C. Streaming media delivery system
US9742824B2 (en) 2000-09-12 2017-08-22 Wag Acquisition, L.L.C. Streaming media delivery system
DE10129120B4 (en) * 2001-03-21 2006-08-31 Artec Technologies Ag Method and device for recording and reproducing multimedia data
DE10129120A1 (en) * 2001-03-21 2002-10-02 Artec Technologies Ag Recording, reproducing multimedia data involves generating stream of audio visual and/or multimedia data in computer; data stream is automatically stored in segments of adjustable length
US10977330B2 (en) 2008-12-31 2021-04-13 Apple Inc. Playlists for real-time or near real-time streaming
US9558282B2 (en) 2008-12-31 2017-01-31 Apple Inc. Playlists for real-time or near real-time streaming
US10044779B2 (en) 2010-04-01 2018-08-07 Apple Inc. Real-time or near real-time streaming
US9729830B2 (en) 2010-04-01 2017-08-08 Apple Inc. Real-time or near real-time streaming
US10693930B2 (en) 2010-04-01 2020-06-23 Apple Inc. Real-time or near real-time streaming
US11019309B2 (en) 2010-04-01 2021-05-25 Apple Inc. Real-time or near real-time streaming
US9531779B2 (en) 2010-04-07 2016-12-27 Apple Inc. Real-time or near real-time streaming
US8892691B2 (en) 2010-04-07 2014-11-18 Apple Inc. Real-time or near real-time streaming
US10523726B2 (en) 2010-04-07 2019-12-31 Apple Inc. Real-time or near real-time streaming
US9832245B2 (en) 2011-06-03 2017-11-28 Apple Inc. Playlists for real-time or near real-time streaming
US8856283B2 (en) 2011-06-03 2014-10-07 Apple Inc. Playlists for real-time or near real-time streaming
US8843586B2 (en) 2011-06-03 2014-09-23 Apple Inc. Playlists for real-time or near real-time streaming

Also Published As

Publication number Publication date
FI99063C (en) 1997-09-25
FI99063B (en) 1997-06-13
FI955356A0 (en) 1995-11-07
AU7301196A (en) 1997-05-29

Similar Documents

Publication Publication Date Title
EP1432158B1 (en) Receiver for a multipart service
US6654545B2 (en) Image signal and data storage medium implementing a display cycle identifier
JP3830507B2 (en) Method and apparatus for providing service selection in a multi-service communication system
KR100636198B1 (en) Data broadcasting content transmitting method, apparatus therefore, data broadcasting content receiving method, apparatus therefore
KR100641594B1 (en) Data transmission control method, data transmission method, data transmitter, and receiver
AU739958B2 (en) Information providing apparatus and method, information receiving apparatus and method, and transmission medium
US6415135B1 (en) Transmission protocol for file transfer in a DAB system
CA2192958C (en) A method for transmitting digital data and digital complementary data, and a method for playing back digital data and digital complementary data
WO1997017775A1 (en) Multimedia reception in a digital broadcasting system
EP1608093A1 (en) Method and apparatus for decoding MOT data
EP0872053A1 (en) Transmission of multimedia objects in a digital broadcasting system
KR19990023685A (en) Information providing apparatus and method, Information receiving apparatus and method, Information providing system and transmission medium
KR100439672B1 (en) Method and apparatus for the transmission of broadcasts
WO1997017776A1 (en) Transport of audio in a digital broadcasting system
JP3133958B2 (en) Digital signal transmission equipment
EP1631077A2 (en) Digital multimedia broadcast receiving apparatus and method thereof
US20020059572A1 (en) Network, transmitter terminal and method of forming an access point in a data stream
WO1997013337A1 (en) Transfer of a file group in a digital broadcasting system
JP2001358689A (en) Signal multiplexing device and method, and record medium
JPH11168437A (en) Digital data transmission and reception method in fm multiplex broadcasting and device using the same
JP2003198495A (en) Method and device for encoding data and method and device for decoding data
JP2002044614A (en) Epg system and receiver

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE HU IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TR TT UA UG US UZ VN AM AZ BY KG KZ MD RU TJ TM

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): KE LS MW SD SZ UG AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: JP

Ref document number: 97517888

Format of ref document f/p: F

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: CA