METHOD FOR SYNCHRONIZING EVENTS WITH STREAMING DATA
Field of the invention
The present invention relates generally to a method for synchronizing events on a computer with a stream of data being received by such computer from another computer through a network.
The resulting synchronization of streamed data and pre-programmed events can be used to trigger actions on the local computer based on the amount of data being received.
The most obvious use for the current invention is the interaction of streamed media (audio and video) on a webpage (or other Internet-enabled application) with client-based procedures. The value of this will be particularly appreciated by anyone familiar with Internet advertising or interactive narrative.
The invention enables the presentation of offers and interactivity in streamed content, matching them to specific points within the content.
Background of the invention
The transmission of files from one computer to another via a network has become commonplace in modern life. Anyone who uses a computer has, knowingly or unknowingly, requested a file from another computer or server. The most common method uses hyperlinks on a webpage.
By clicking on a hyperlink ("link") with a mouse, users effectively request the transmission of one or many data files to their local computers. Until a few years ago, before streaming of files was available, the client computer could not start accessing data in a downloaded file until the entire file had been transferred. This made large files, such as video or audio, impractical for most users, unless they were willing to wait 10 minutes or more after clicking on something on a screen before seeing the results.
Streaming files changed that by organizing data into a format that can be interpreted by the receiving machine as it arrives, in real time. Among other things, this permits video signals to be broadcast via a network, with the client machine rendering the media in real time as the data arrives, without having to wait for the entire file to arrive.
Many streaming formats have been developed by various consortiums and private entities, the most famous being part of the QuickTime (Apple), RealMedia (Real Networks) and Windows Media (Microsoft) multimedia platforms.
The advent of streaming formats solved the problem of accessing large linear files via slower connections, making the distribution of video and music through the Internet a viable enterprise. Nevertheless, the nature of streaming files prevents the client computer from verifying the integrity of the data received, since it has to process each packet and move on to the next incoming one. If data is lost during transmission, it is lost, and any synchronicity between elements is lost with it. With streaming files, some loss of information is to be expected.
Summary of the Invention
The present invention solves this problem by using two parallel connections: the data stream, and an additional connection which relays metadata about the stream, such as how much has been transmitted by the streaming server.
In essence, the current invention functions by calculating the sync points between the stream and the programmed events based on the data sent by the server, not the data received by the client. The invention utilizes two independent timelines:
-the data stream; and
-an event-sync connection.
The data stream is decoded and rendered as it arrives. This "media timeline" is completely linear: information is displayed as data arrives; the data is used to generate the media (audio and video, for example).
A separate event-sync connection is established for sync purposes. The information coming from this alternate connection is used on the client side to skip along the "events timeline". This second timeline is independent from the stream and non-linear, meaning that the system can access any event at any time. In other words, in addition to the stream of data, a parallel connection to the streaming server is used to report metadata (in the case of video: the amount of time, the frames per second, etc.). This metadata is used by the client-side application to trigger events based on what the server indicates it has streamed, not based on what the client has received in the stream.
In essence, a client side component performs two parallel tasks: it receives the media stream and renders it, while at the same time triggering events based on metadata received from the streaming server via a separate link.
The currently preferred embodiment uses Macromedia Flash for the client side component and Macromedia Flash Server for the server side.
A simple way to understand the invention is:
-at the exact moment when the stream starts arriving,
-the client computer sets a mark in time, effectively starting a stopwatch,
-with that point in time as a starting line, the client component uses the other connection to figure out how much data the server has pushed, and
-this data is used to time the events.
Thus, for example, if the stream starts arriving at T1, and the metadata indicates that 3 seconds of video have been delivered, then the client computer can accurately trigger events scheduled for T1 + 3 seconds.
This solves any inaccuracies that may be caused by dropped data, since the events are coordinated with the metadata and not the stream, and the metadata can be checked for errors.
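By way of illustration only, this timing logic can be sketched in ActionScript as follows. The names scheduledEvents, onStreamStart and onServerMetadata are hypothetical placeholders rather than part of the preferred embodiment described below; the sketch merely shows events being released according to the server-reported amount of delivered media instead of the client's clock.

// Hypothetical sketch: events are triggered from the amount of media the server
// reports having pushed, not from the client's own clock.
var scheduledEvents:Array = [
    {at:3, run:function() { trace("show offer"); }},    // scheduled for T1 + 3 seconds
    {at:7, run:function() { trace("show credits"); }}   // scheduled for T1 + 7 seconds
];
var t1:Number;            // the "stopwatch" mark, set when the stream starts arriving
var nextEvent:Number = 0;

function onStreamStart():Void {
    t1 = getTimer();      // T1: the starting line for the events timeline
}

// called each time the parallel sync connection reports how much the server has pushed
function onServerMetadata(secondsDelivered:Number):Void {
    while (nextEvent < scheduledEvents.length && scheduledEvents[nextEvent].at <= secondsDelivered) {
        scheduledEvents[nextEvent].run();
        nextEvent++;
    }
}

In this sketch the stopwatch mark t1 only anchors the events timeline; the actual triggering is governed entirely by the server-reported value secondsDelivered, so data dropped on the way to the client cannot desynchronize the events.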
Brief description of the drawings
The foregoing brief description, as well as further objects, features, and advantages of the present invention will be understood more completely from the following detailed description of a presently preferred, but nonetheless illustrative, embodiment with reference being had to the accompanying drawings, in which the only figure is a block diagram describing the exchange of data between the computers involved, where A is the computer client running a web browser, B is a web server and C is a Flash Content Server.
Detailed Description of the Preferred Embodiment
The present application describes the currently preferred embodiment of the invention; it is used for illustration purposes and is in no way the only manner in which to achieve the described results.
In order to further understand the invention, it is important to understand the concept of the timeline. Time is perceived by humans to move in a linear, sequential manner: T0 comes before T1, which comes before T2, which comes before T3, and so on. Video is presented to observers in the same way: the first frame precedes the second frame, which in turn precedes the third frame, and so on. The sequence of frames, presented to a user in order, is defined as a timeline: it is basically the linear arrangement of frames used to represent passing time.
Table A shows the way in which REAL TIME and the VIDEO TIMELINE relate. In Table A we see a perfect match between elapsed time and presented frames: for every passing time unit the video timeline renders a unique and matching video frame. Since real time and the video timeline match, it would be possible to synchronize any event to the video timeline by using real time as a reference. Should one want to match an event to the image presented on frame 4, all that needs to be done is to instruct the program to trigger the event on second 4 (T4). If this were a real-world scenario, all that would be needed to synchronize events would be to trigger them based on elapsed real time.
Table A
But Table B shows a case in which time and the video timeline lose their correspondence. If frame 3 becomes delayed during transmission and arrives at the client computer a second late, any event synchronized to it will trigger early. The table shows no frame being rendered on T3, which causes a misstep and leaves the video timeline lagging with respect to elapsed time, placing event D on frame 3 instead of frame 4. It is clear that T4 now matches frame 3; therefore events synched to T4 will not take place on frame 4, but on frame 3.
Table B
This is what happens when events are synched to real passing time, or a clock. In the current scenario, the objective is to synchronize events to frames, not to ticking seconds. The problem is solved in Table C, where it can be seen that a skipped frame results in a skipped event, preserving the relation between frames and events: event C still matches frame 3. The fact that this takes place at T4 is irrelevant, since the objective is to match programmed events to video frames (or streamed data).
Table C
The way to match frames and events, allowing for data loss, is to match the events to the transmitted data, and not to elapsed time. This is achieved using a sync signal, a parallel connection between the client and the server that serves as a control stream, albeit an intermittent one.
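A minimal sketch of this frame-keyed approach, again with hypothetical names (eventsClip, firstEventFrame, onSyncSignal) and assuming the frame rate is fixed at authoring time, could look like this:

// Hypothetical sketch: the events timeline is driven by the transmitted data, not by a clock.
var eventsClip:MovieClip = _root.events_mc;   // clip holding one event keyframe per video frame (hypothetical)
var firstEventFrame:Number = 2;               // frame of eventsClip that corresponds to video frame 1
var fps:Number = 12;                          // frame rate agreed at authoring time

// called whenever the sync signal reports how many seconds of media the server has pushed
function onSyncSignal(secondsDelivered:Number):Void {
    var targetFrame:Number = Math.floor(secondsDelivered * fps);
    // jump straight to the frame that the transmitted data corresponds to;
    // a frame dropped in transit simply has its event skipped along with it
    eventsClip.gotoAndStop(firstEventFrame + targetFrame);
}

Because the events timeline is addressed by frame number rather than advanced by a clock, the relation between frames and events shown in Table C is preserved regardless of when, or whether, individual frames arrive.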
The current embodiment of the invention used to synchronize streamed video and client based events is built using technologies available from Macromedia and Adobe, among others.
Macromedia Flash is used to program a client-side module that requests and receives the stream of video while simultaneously connecting to the Flash Content Server and triggering events based on the data from this intermittent connection. In order for this to work, several steps need to take place.
Before the "video stream timeline" and "the event timeline" can be synchronized as they play, they need to be matched in authoring. This is done by using the video as a guide for building the events timeline. Since the events will be programmed using Flash in the current embodiment, we need to use a video format that is compatible with Flash. The FLV streaming format is used in this example, since it is the same as what will be streamed. To turn video into an FLV file, we use ADOBE AFTER EFFECTS. The video is imported into After Effects, where adjustments can be made to its size, frame rate, duration, quality, compression, etc. Once the video is of desired size and quality, the FLV file is generated.
The FLV file is imported into Flash, where a key step is that the video properties set in the Flash file are matched to those of the FLV file. The frame rate must be the same in both; otherwise the procedure will not work.
Using the embedded video as a place-holder, we now develop and program any events and match them to the image. Synchronicity is not an issue, since all elements are on a single computer and a common program, so any event timed for frame 3 will always happen in frame 3. Once the programming is done and the events all take place in synchronism with their target frames, we delete the video and replace it with a remote call to a streaming file.
The resulting code will preferably look something like this (although alternative coding will be apparent to those skilled in the art):
_root.filename = "an1 ";(without the ". flv" extension) jOot.direccion = "rtmp://200.47.135.86/killbill/";(Flash Communication Server internet address)
_root.iniciaMovie = 2;
_root.bufferTime = 2;(buffering time)
_root.MovieFps = 6;(framerate)
_root.movieRunTime = 29;
We now need to place the .FLV streaming file in the correct directory. The final step is to establish a connection between the events played inside the Flash file and the video streamed from the server. This can be better understood with reference to the figure.
Figure 1 is a block diagram showing the computers involved and data transmitted between them, where block A is the client computer, running a web browser displaying an HTML document which holds a Flash file (swf). Block B is a web server and block C is a Flash Content Server.
The first thing that takes place, as represented by data flow 1, is that the HTML document requests the SWF file from the web server. In data flow 2 the web server returns the SWF, which is executed and requests a connection to the Flash Content Server (FCS) in data flow 3. The connection is established via data flow 4, and through it the SWF requests the video stream, as seen in data flow 5. Data flow 6 shows the data stream traveling from the FCS to the SWF, from the server to the client. The instant the stream starts arriving at the client, the timeline is reset to T=0. The SWF and the FCS keep communicating intermittently via data flow 7. Using T0 as a reference, along with the information in data flow 7 regarding the amount of data transmitted, the SWF file triggers events that are matched to their corresponding frames. In other words, the events timeline does not run linearly; it jumps from one frame to another based on the question "how much information has the FCS sent?" instead of "how much time has elapsed?"
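Purely for orientation, data flows 3 through 7 can be summarized in the following simplified client-side sketch. The identifiers videoHolder, eventsClip, firstEventFrame, fps and serverSecondsDelivered are hypothetical stand-ins for stage instances and for the metadata reported over data flow 7, and the RTMP address is a placeholder; the full code of the embodiment is reproduced below.

// Hypothetical sketch of the client-side sequence; videoHolder is a Video instance
// on the stage and eventsClip is the clip holding the events timeline.
var nc:NetConnection = new NetConnection();
nc.connect("rtmp://server/application");         // data flows 3 and 4: connect to the FCS
var ns:NetStream = new NetStream(nc);
videoHolder.attachVideo(ns);                      // render the media timeline as it arrives
ns.play("filename");                              // data flow 5: request the video stream
var t0:Number;
ns.onStatus = function(info) {
    if (info.code == "NetStream.Play.Start" && t0 == undefined) {
        t0 = getTimer();                          // data flow 6 begins: reset the timeline to T=0
    }
};
// data flow 7: intermittently ask how much the FCS has sent and jump the events timeline there
setInterval(function () {
    eventsClip.gotoAndStop(firstEventFrame + Math.floor(serverSecondsDelivered() * fps));
}, 100);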
An exemplary embodiment of the present invention can be implemented using the following computer code:
Shoshmosis complete code:
dynamic class shoshmosis extends MovieClip { function shoshmosis() { atach1 = false; atach2 = false; timer = setInterval(this, "syncronizo", 100); scope = this; detectarBW();
}
[lnspectable( type="String", defaultValue="rtmp://.../" )] public function set Sdireccion(direccion:String) { direccion = direccion; invalidate();
} public function get Sdireccion():String { return direccion;
}
[lnspectable( type="String", defaultValue="videos/becool/becool" )] public function set SfileName(fileName:String) { fileName = fileName; invalidate();
} public function get SfileName():String { return fileName;
}
[lnspectable( type="Number", defaultValue="4" )] public function set SiniciaMovie(iniciaMovie:Number) { iniciaMovie = iniciaMovie; invalidate();
} public function get SiniciaMovie():Number { return iniciaMovie;
}
[lnspectable( type="Number", defaultValue="5" )] public function set SbufferTime(bufferTime:Number) { bufferTime = bufferTime; invalidateQ;
} public function get SbufferTime():Number { return bufferTime;
}
[lnspectable( type="String", defaultValue="12" )] public function set SMovieFps(MovieFps:String) { MovieFps = MovieFps; invalidate();
} public function get SpausePlayX():String { return pausePlayX;
}
[lnspectable( type="String", defaultValue="650" )] public function set SpausePlayX(pausePlayX:String) { pausePlayX = pausePlayX; invalidate();
} public function get SpausePlayY():String { return pausePlayY;
}
[lnspectable( type="String", defaultValue="335" )] public function set SpausePlayY(pausePlayY:String) { pausePlayY = pausePlayY; invalidateQ;
} public function get SrestopX():String { return restopX;
}
[lnspectable( type="String", defaultValue="700" )]
public function set SrestopX(restopX:String) { restopX = restopX; invalidate();
} public function get SrestopY():String { return restopY;
}
[lnspectable( type="String", defaultValue="335" )] public function set SrestopY(restopY: String) { restopY = restopY; invalidate();
} public function get SMovieFps():String { return MovieFps;
}
[lnspectable( type="String", defaultValue="51.8" )] public function set SmovieRunTime(movieRunTime:String) { movieRunTime = movieRunTime; invalidate();
} public function get SmovieRunTime():String { return movieRunTime;
}
[lnspectable( type="Boolean", defaultValue="true" )] public function set SpausePlay(pausePlay:Boolean) { pausePlay = pausePlay; invalidate();
} public function get SpausePlay():Boolean { return pausePlay;
}
[lnspectable( type="Boolean", defaultValue="true" )] public function set Sireplay(sReplay:Boolean) { sReplay = sReplay; invalidate^);
} public function get Sireplay():Boolean { return sReplay;
}
[lnspectable( type="Boolean", defaultValue="true" )] public function set SocultaVideo(ocultaVideo:Boolean) { ocultaVideo = ocultaVideo; invalidate();
} public function get SocultaVideo():Boolean { return ocultaVideo;
} function detectarBW() { bnc = new NetConnection(); sco = this;
bnc.onStatus = function(info) { trace("Level: "+info.level+" Code: "+info.code); if (info.code == "NetConnection.Connect.Success") { trace("--- connected to: "+this.uri); }
};
NetConnection.prototype.onBWDone = function(p_bw) { trace("onBWDone: "+p_bw); bnc.close();
NetConnection.prototype.onBWDone = null; if (p_bw<=320) {
_root.calidad = "048";
} if (p_bw>=321 && p_bw<=504) { _root.calidad = "300";
} if (p_bw>=505) {
_root.calidad = "512";
} scope.SfileName = scope.SfileName.concat(_root.calidad); trace("++++"+scope.SfileName); scope.conectar();
};
NetConnection.prototype.onBWCheck = function() { return ++counter;
// Serverside, just ignore any return value and return the call count
}; bnc.connect("rtmp://cp13971.edgefcs.net/ondemand", true);
} function conectar() { myCon = new NetConnection(); myCon.onStatus = function(obj) { if (obj.code == "NetConnection.Connect.Success") { scope.stream();
}
}; trace(scope.Sdireccion); myCon.connect(scope.Sdireccion);
} function stream() { scope.myNetStream = new NetStream(myCon); scope.vid_mc.attachVideo(scope.myNetStream); scope.myNetStream.setBufferTime(scope.SbufferTime); scope.myNetStream.onStatus = function(obj) { if (obj.code == "NetStream.Play.Start") { }
}; trace(scope.SfileName); scope.myNetStream.play(scope.SfileName);
_root.SH_JS_START = "OK";
} function syncronizo() {
frameFoward = int(scope.myNetStream.time*scope.SMovieFps);
scope._parent.gotoAndStop(scope.SiniciaMovie+frameFoward);
if (scope.SpausePlay) {
if (scope.myNetStream.time>0 && !atach1) {
thisBug = scope._parent.attachMovie("pause_play", "pause_play_mc", scope._parent.getNextHighestDepth(), {_x:SpausePlayX, _y:SpausePlayY});
thisBug3 = scope._parent.attachMovie("restop", "restop_mc", scope._parent.getNextHighestDepth(), {_x:SrestopX, _y:SrestopY});
atach1 = true;
} }
if (scope.myNetStream.time>scope.SmovieRunTime) {
_root.SH_JS_END = "true";
_root.SH_JS_FINISH = "OK";
scope.myNetStream.close(scope.SfileName);
clearInterval(timer);
thisBug.removeMovieClip();
if (!atach2) {
if (scope.Sireplay) {
thisBug2 = scope._parent.attachMovie("replay", "replay_mc", scope._parent.getNextHighestDepth(), {_x:SpausePlayX, _y:SpausePlayY});
atach2 = true;
} else if (scope.SocultaVideo) {
scope._parent.gotoAndStop(SiniciaMovie-1);
} atach2 = true;
}
} } function pause_play() { scope.myNetStream.pause(scope.SfileName);
} function replay() { thisBug3.removeMovieClip(); thisBug2.removeMovieClip(); scope._parent.gotoAndPlay(SiniciaMovie-1);
} function restop() { thisBug3.removeMovieClip(); thisBug.removeMovieClip(); thisBug2.removeMovieClip(); scope._parent.gotoAndStop(SiniciaMovie-1);
// the path that follows applies only to this case; modify as appropriate
_root.sonido.snd.setVolume(0); _root.SH_JS_CLOSE = "OK"; } }
// This function enables the synchronization process
function syncronizo() {
// get the elapsed video streaming time and the Flash movie's frames per second,
// and calculate the exact next position of the playhead in the current timeline
frameFoward = int(scope.myNetStream.time*scope.SMovieFps);
// move the playhead (in the user's timeline) to the accurate frame,
// "synchronized" with the playhead of the video stream on the server side
scope._parent.gotoAndStop(scope.SiniciaMovie+frameFoward);
// handle the pause and stop process
if (scope.SpausePlay) {
if (scope.myNetStream.time>0 && !atach1) {
thisBug = scope._parent.attachMovie("pause_play", "pause_play_mc", scope._parent.getNextHighestDepth(), {_x:SpausePlayX, _y:SpausePlayY});
thisBug3 = scope._parent.attachMovie("restop", "restop_mc", scope._parent.getNextHighestDepth(), {_x:SrestopX, _y:SrestopY});
atach1 = true;
} }
// control the video's "end of file"
if (scope.myNetStream.time>scope.SmovieRunTime) {
_root.SH_JS_END = "true";
_root.SH_JS_FINISH = "OK";
scope.myNetStream.close(scope.SfileName);
clearInterval(timer);
thisBug.removeMovieClip();
if (!atach2) {
// control the replay process
if (scope.Sireplay) {
thisBug2 = scope._parent.attachMovie("replay", "replay_mc", scope._parent.getNextHighestDepth(), {_x:SpausePlayX, _y:SpausePlayY});
atach2 = true;
// control the hide-video process
} else if (scope.SocultaVideo) {
scope._parent.gotoAndStop(SiniciaMovie-1);
}
atach2 = true;
}
}
}
While the invention has been particularly shown and described with reference to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention.