US20040234250A1 - Method and apparatus for performing an audiovisual work using synchronized speech recognition data - Google Patents

Info

Publication number
US20040234250A1
US20040234250A1 (United States patent application Ser. No. 10/797,576)
Authority
US
United States
Prior art keywords
basic units
time codes
audio signal
audio
providing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/797,576
Inventor
Jocelyne Cote
Howard Ryshpan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ryshco Media Inc
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to US 10/797,576
Assigned to RYSHCO MEDIA INC. reassignment RYSHCO MEDIA INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: COTE, JOCELYNE, RYSHPAN, HOWARD
Publication of US20040234250A1
Status: Abandoned

Classifications

    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11B: INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 27/00: Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B 27/10: Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B 27/34: Indicating arrangements
    • G11B 27/02: Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B 27/031: Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B 27/034: Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs

Definitions

  • the word to phoneme database 20 comprises a list of words with their corresponding phonemes.
  • the phoneme to grapheme database 21 comprises a list of words where the phonemes of each word are mapped to the corresponding graphemes.
  • the project database 28 comprises data about the project as explained below.
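  • The word to phoneme database 20 and the phoneme to grapheme database 21 described above amount to simple lookup tables. The following is a minimal illustrative sketch in Python; the sample entries and names are invented for illustration and are not taken from the patent.

```python
# Minimal sketch of the two lookup databases described above.
# The sample entries are invented for illustration.

# Word to phoneme database (reference 20): each word maps to its phoneme sequence.
WORD_TO_PHONEMES: dict[str, list[str]] = {
    "hello": ["HH", "AH", "L", "OW"],
    "world": ["W", "ER", "L", "D"],
}

# Phoneme to grapheme database (reference 21): for each word, the phonemes are
# mapped to the corresponding graphemes (letter groups) of the word's spelling.
PHONEMES_TO_GRAPHEMES: dict[str, list[tuple[str, str]]] = {
    "hello": [("HH", "h"), ("AH", "e"), ("L", "ll"), ("OW", "o")],
    "world": [("W", "w"), ("ER", "or"), ("L", "l"), ("D", "d")],
}
```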
  • FIG. 3 there is shown another embodiment of the invention.
  • the project environment is set.
  • the script is then prepared according to step 32 of FIG. 3.
  • the synchguide is prepared according to step 34 of FIG. 3.
  • the synchguide is modified.
  • information related to the project is generated.
  • step 40 the global parameters for the project are entered.
  • the global parameters comprise, but are not limited to, the name of the project, the type of project, the identity as well as the access login and password of the persons allowed to work on the project, etc.
  • the project parameters are entered.
  • the project parameters comprise, but are not limited to, the name and the location of the video source and the name and the location of the audio source.
  • the global parameters and the project parameters are provided to the post-production sound recording synchguide 22 and stored in the project database 28 .
  • the project is planned.
  • the step of planning the project comprises the step of assigning a time schedule to the persons allowed to work on the project.
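  • The project environment set according to FIG. 4 can be pictured as two small records persisted in the project database 28. The sketch below is a hypothetical illustration; the field names are assumptions, since the patent only gives examples of the parameters.

```python
# Hypothetical sketch of the global and project parameters entered in FIG. 4.
# Field names are assumptions; the patent only lists examples of parameters.
from dataclasses import dataclass, field


@dataclass
class GlobalParameters:
    project_name: str
    project_type: str
    allowed_users: dict[str, str] = field(default_factory=dict)  # login -> password


@dataclass
class ProjectParameters:
    video_source_name: str
    video_source_location: str
    audio_source_name: str
    audio_source_location: str


# Both records are provided to the post-production sound recording synchguide (22)
# and stored in the project database (28).
```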
  • step 48 the script is conformed.
  • step 50 the script is formatted.
  • step 52 a part of the formatted script is selected. The part of the formatted script is selected from the conformed text source 14 using the control of the post-production sound recording synchguide 22 .
  • the script is provided to the phoneme recognition module 18 .
  • phonemes are generated by the phoneme recognition module 18 using at least the provided script and time codes.
  • graphemes are generated using the phoneme to grapheme unit 16, the word to phoneme database 20 and the phoneme to grapheme database 21. In the preferred embodiment, graphemes are generated with their related time codes. More precisely, the phoneme to grapheme unit 16 receives a word from the conformed text source 14; the phonemes of the word provided by the conformed text source 14 are then found using the word to phoneme database 20.
  • the phoneme to grapheme unit 16 also receives the phonemes as well as time codes from the phoneme recognition module 18. A match is then performed between the phonemes provided by the phoneme recognition module 18 and the phonemes found using the word to phoneme database 20. The phoneme to grapheme unit 16 then provides the graphemes, using the phoneme to grapheme database 21, together with the word and the matched phonemes, as illustrated by the sketch below.
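  • The matching of steps 56 and 58 can be illustrated as a small alignment routine: the phonemes expected for a word of the conformed text are compared with the phonemes recognized from the audio, and each matched phoneme's time codes are carried over to the corresponding grapheme. The type and function names below are illustrative, and an exact, in-order match is assumed; a real matcher would have to tolerate recognition errors.

```python
# Sketch of steps 56-58: match recognized phonemes (with time codes) against the
# phonemes expected for a word, then emit graphemes carrying those time codes.
# An exact, in-order match is assumed for simplicity.
from dataclasses import dataclass


@dataclass
class TimedPhoneme:
    phoneme: str
    start: float  # time code of the beginning of the phoneme
    end: float    # time code of the end of the phoneme


@dataclass
class TimedGrapheme:
    grapheme: str
    start: float
    end: float


def word_to_timed_graphemes(word: str,
                            recognized: list[TimedPhoneme],
                            word_to_phonemes: dict[str, list[str]],
                            phonemes_to_graphemes: dict[str, list[tuple[str, str]]],
                            ) -> list[TimedGrapheme]:
    expected = word_to_phonemes[word]
    if [p.phoneme for p in recognized] != expected:
        raise ValueError(f"recognized phonemes do not match the expected phonemes for {word!r}")
    timed_graphemes = []
    for timed, (_phoneme, grapheme) in zip(recognized, phonemes_to_graphemes[word]):
        # Each grapheme inherits the begin and end time codes of its matched phoneme.
        timed_graphemes.append(TimedGrapheme(grapheme, timed.start, timed.end))
    return timed_graphemes
```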
  • the graphemes are placed on the synchguide.
  • the graphemes are placed on the synchguide using the post-production sound recording synchguide 22 .
  • the synchguide is displayed using the display 12 .
  • step 64 a check is performed on the synchguide in order to confirm that the original synchguide is correct. If the synchguide is not correct, i.e. if for instance small grammatical errors are detected, the text may be amended according to step 66. If the synchguide is correct, then according to step 68, labials as well as other miscellaneous information are provided.
  • the user may provide a new text.
  • the new text is provided to the conformed text source 14 .
  • a new sound source may be provided using the new sound source 26 .
  • the new sound source is aligned with the new text. This step is performed by generating the phonemes related to the new text source and their related time codes and then performing the phoneme to grapheme conversion using the phoneme to grapheme unit 16 together with the word to phoneme database 20 and the phoneme to grapheme database 21 . Using the time codes generated by the phoneme to grapheme unit 16 , the new sound source is aligned with the new text.
  • step 76 at least one part of the new synchguide is then aligned with the old synchguide.
  • the alignment is performed in the preferred embodiment of the invention using the time codes.
  • the new synchguide is saved in the project database 28 .
  • step 78 of FIG. 7 the new synchguide is provided to the user.
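  • Aligning one part of the new synchguide with the old one (step 76) can be reduced, in the simplest case, to offsetting the time codes of the new take so that its first unit coincides with the corresponding unit of the old synchguide. The sketch below is a rough illustration under that assumption.

```python
# Rough sketch of step 76: shift the time codes of a new take so that its first
# unit lines up with the corresponding unit of the old synchguide.
# Assumes the first entries of both lists correspond to the same unit.
def align_new_to_old(new_units, old_units):
    """new_units / old_units: lists of (label, start, end) tuples."""
    if not new_units or not old_units:
        return list(new_units)
    offset = old_units[0][1] - new_units[0][1]
    return [(label, start + offset, end + offset) for (label, start, end) in new_units]
```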
  • the persons allowed to work on the project may work via a remote location.
  • the post-production sound recording synchguide 22 may be connected in this embodiment of the invention to a post-production sound recording synchguide server.
  • Each allowed person may then access the post-production sound recording synchguide server remotely through a Local Area Network (LAN) or through a Wide Area Network (WAN).
  • FIG. 8 there is shown a screen shot of the user interface in this embodiment of the invention.
  • the user interface comprises a menu, a guide track 90 , a symbol menu 94 , a loop/preview box 96 , a zoom window 99 , a navigation window 100 and a script window 102 .
  • the guide track 90 enables the user to visualize the universal guide track.
  • the universal guide track comprises a list of all the current actors on the scene as well as all the sound effects that are not performed by an actor.
  • the identity of the actors is detected using the project database 28.
  • the dialog is synchronized with time and displayed in a manner that allows an easy post-synchronization.
  • the dialog is synchronized with time using the time codes provided with the graphemes.
  • the graphemes are placed with a letter length that corresponds to the phoneme's length in order to provide easy post-synchronization.
  • the graphemes may be placed with a letter length that corresponds to the phoneme's length using a time-dependent character set or, for instance, a technology such as "TrueType", as sketched below.
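  • One way to realize such a time-dependent placement is to scale each grapheme's display width by its duration; the hypothetical sketch below assumes a fixed pixels-per-second scale for the guide track.

```python
# Hypothetical sketch: give each grapheme a display width proportional to its
# duration, so that the guide track reads at the pace of the audio.
PIXELS_PER_SECOND = 200  # assumed horizontal scale of the guide track


def grapheme_layout(timed_graphemes):
    """timed_graphemes: list of (grapheme, start, end) in seconds.
    Returns a list of (grapheme, x_position, width) in pixels."""
    layout = []
    for grapheme, start, end in timed_graphemes:
        x_position = start * PIXELS_PER_SECOND
        width = (end - start) * PIXELS_PER_SECOND
        layout.append((grapheme, x_position, width))
    return layout
```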
  • different colors may be used to show the temporal properties of the graphemes.
  • elements such as breaths, efforts, presence and exclamations are placed on the universal guide track using special expository symbols.
  • dedicated colors are used to present a special effect.
  • Foleys are placed on a Foley track.
  • a moveable synchronizer bar enables the allowed users to view with precision the current position.
  • a user may use the moveable synchronizer bar to move to a precise position.
  • the navigation window 100 enables a user to navigate through the different parts of the project.
  • the navigation window 100 comprises a display that allows a user to find out his relative position.
  • the navigation window 100 also comprises a display that allows a user to change the current scene.
  • the navigation window 100 also provides a zoom in/out tool.
  • the navigation window 100 also provides a tool that enables speed control and an indication of the frame reference.
  • the script window 102 enables a user to have access to the conformed text.
  • the text currently spoken on the screen is highlighted.
  • a user may edit the text in the conformed text window.
  • the zoom window 99 allows a user to view the lip movements with a greater precision.
  • the invention does not comprise the conformed text source 14 .
  • the phoneme recognition module 18 may not provide the phonemes with as great an efficiency, as the conformed text source is not available.
  • the system comprises a video source 10 , a display 12 , a conformed text source 14 , a phoneme recognition module 18 , a post-production sound recording synchguide 22 , an audio destination 24 , a new sound source 26 and a project database 28 .
  • the video source 10 provides an audio source to the phoneme recognition module 18 and to the post-production sound recording synchguide 22 .
  • the video source 10 further provides time codes to the conformed text source, to the phoneme recognition module 18 and to the post-production sound recording synchguide 22 .
  • the video source 10 provides the video source to the display 12 and to the post-production sound recording synchguide 22 .
  • the conformed text source 14 provides the conformed text to the phoneme recognition module 18 .
  • the phoneme recognition module 18 provides the phonemes with the related time codes to the post-production sound recording synchguide 22 .
  • the phoneme recognition module 18 and the conformed text source 14 are controlled by the post-production sound recording synchguide 22 .
  • the phoneme recognition module 18 is of the same type as the one described in the first embodiment of the invention.
  • the post-production sound recording synchguide 22 provides the phonemes with their related time codes on the synchguide which is displayed by the display. More precisely, the post-production sound recording synchguide 22 provides a user interface where the phonemes are placed together with an indication of the current temporal location. The user therefore has an indication of when a sound begins and when a sound ends. It will be appreciated that this embodiment is simpler than the first embodiment, but it greatly improves on the prior art of “beep and wipe” systems. An alternative to this embodiment is to not include the conformed text source 14.
  • Closed-captioning may be divided into two different types.
  • a first type of closed-captioning is when there is sufficient lead time between completion of a program, series or film and its transmission.
  • a file of captions can be prepared.
  • Each caption will be assigned its own unique time code cue which references back to the original master tape.
  • a second type of closed-captioning is when there is not sufficient lead time between completion of a program, series or film and its transmission. This is the case for news, current affairs programs or live broadcasts. In such cases, programs must be captioned online by a live real-time steno-captioner. Unfortunately, someone skilled in the art will appreciate that there is never sufficient time to insert proper time codes, and the source materials often do not contain continuous time codes to trigger these cues. Consequently, even if the steno-captioners are able to prepare their caption scripts just prior to transmission, it is still necessary for one of them to manually cue out each caption one by one. It will be appreciated that this is an inefficient use of highly skilled and valuable personnel.
  • FIG. 10 there is shown an embodiment of the invention for assisting closed-captioning.
  • an audio video source 200 provides an audio signal to a speech recognition module 202 .
  • the speech recognition module 202 provides recognized words and related time codes to a closed-caption editor 204 .
  • the closed-caption editor 204 further receives a video signal from the audio video source 200 .
  • FIG. 11 there is shown how the embodiment described in FIG. 10 operates.
  • a closed-captioning operator sets his or her preferences.
  • the preferences comprise at least user preferences related to a user interface.
  • the preferences may comprise additional preferences.
  • an audio video source signal is provided by the audio video source 200 .
  • step 210 speech recognition of the audio video source signal is performed by the speech recognition module 202 .
  • the speech recognition module 202 outputs recognized words and related time codes and provides the recognized words and the related time codes to the closed-caption editor 204 .
  • results from the speech recognition are analyzed according to user preferences provided at step 206 . For instance, if a rate of recognized words reaches a predetermined level, an indication to that effect is provided to a closed-captioning operator.
  • the recognized words and the related time codes are provided to the closed-captioning operator.
  • the invention reduces the time required to perform the closed-caption generation by at least automating the matching of time code cues to captions which have been prepared by an operator transcribing the audio material.
  • the time required to perform the closed-caption generation can be reduced even further by performing the automatic voice recognition.
  • a 100-minute film could be transcribed and time coded, automatically within 15 to 20 minutes, leaving the closed-captioning operator to verify manually what had previously been accomplished automatically.
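  • The automation described above can be pictured as a matching of the prepared captions against the recognized words and their time codes, so that each caption receives its cue automatically. The simplified sketch below assumes the caption text and the recognized words agree word for word, which a real system could not rely on.

```python
# Simplified sketch of automatic cueing: assign each prepared caption the time code
# of its first word as recognized by the speech recognition module.
# Assumes the caption text and the recognized words agree word for word.
def cue_captions(captions, recognized_words):
    """captions: list of caption strings.
    recognized_words: list of (word, time_code) tuples in spoken order.
    Returns a list of (caption, cue_time_code) tuples."""
    cued, index = [], 0
    for caption in captions:
        cue_time = recognized_words[index][1]
        cued.append((caption, cue_time))
        index += len(caption.split())
    return cued
```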
  • the speech recognition performed according to step 210 is performed on a married track that contains all the dialogues, the music, effects and background or ambient noises.
  • the closed-captioning operator is able to quickly visualize the recognized words on the user interface.
  • the closed-captioning operator is then able to amend the recognized words to correct a defect or to insert a symbol related to an event.
  • the closed-captioning operator may then simply decide to rewrite new closed-captions which comply with a suitable rate.
  • the closed-captioning operator is able to control precisely a location of an element to insert or to amend.
  • the closed-captioning operator may then, in one embodiment, confirm an amendment or a recognized word by pressing a predetermined key. Upon confirmation, related data is then inserted into a Vertical Blanking Interval as known by someone skilled in the art. In an alternative embodiment, an automatic cue is performed.
  • one word at a time can be confirmed by the closed-captioning operator. In another embodiment, more than one word at a time can be confirmed.
  • the closed-captioning operator may provide words to the closed-caption editor 204 .
  • the words provided may be provided via steno data that are then translated into words according to the art.
  • the closed-captioning operator may also provide abbreviations instead of words to the closed-caption editor 204 .
  • the closed-caption editor 204 may in return translate the provided abbreviation into a corresponding word, enabling a saving of time.
  • the closed-caption editor 204 further comprises a look-up database and a steno data to word translation unit.
  • the look-up database comprises a relation between an abbreviation and a word.
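  • In other words, the look-up database behaves like a small abbreviation table consulted as the operator types. The sketch below is a trivial illustration; the entries are invented examples.

```python
# Trivial sketch of the look-up database: the operator types a short form and the
# closed-caption editor expands it. The entries below are invented examples.
ABBREVIATIONS = {
    "gov": "government",
    "intl": "international",
}


def expand(token: str) -> str:
    # Unknown tokens are passed through unchanged.
    return ABBREVIATIONS.get(token, token)
```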
  • the speech recognition module 202 comprises a word database.
  • the word database may be amended by the user.
  • Such word database enables a user to introduce new words or specify a correct orthography of a word.
  • Such word database is therefore of great advantage for close captioning.
  • the speech recognition module 202 may also comprise an orthography module which highlights words that are poorly detected or for which orthography does not seem correct.
  • FIG. 12 there is shown an embodiment of the invention for assisting animation creation.
  • a voice source 216 provides a voice source signal to a speech recognition module 220 .
  • a script source 218 provides a script signal to the speech recognition module 220 .
  • the voice source signal provided by the voice source 216 is generated by an actor according to a script read.
  • the voice source signal may be provided to the speech recognition module 220 in accordance with various data formats.
  • the speech recognition module 220 provides recognized words and time codes to a computerized animation assistant 224 .
  • a visem database 222 provides a visem signal to the computerized animation assistant 224 .
  • a story board database 228 provides a story board signal to the computerized animation assistant 224 .
  • An adjusted voice track signal is provided by the computerized animation assistant 224 to an adjusted voice track database 230 .
  • FIG. 13 there is shown how this embodiment operates.
  • an animation assistant provides its preferences.
  • the preferences comprise information related to a user interface of the computerized animation assistant 224 .
  • a voice source 216 is provided.
  • An actor provides a recording according to a script.
  • the voice source signal comprises a plurality of recordings originating from various actors.
  • step 235 speech recognition is performed using at least the voice source signal provided by the voice source 216 and using the speech recognition module 220 . Recognized words and related time codes are provided to the computerized animation assistant 224 . In the preferred embodiment, the phonemes and the related time codes are also provided to the computerized animation assistant 224 .
  • visems are provided to the computerized animation assistant 224 by the visem database 222 in response to a request performed by the computerized animation assistant 224 .
  • the request comprises at least the phonemes provided by the speech recognition module 220 .
  • the story board database 228 provides a story board signal to the computerized animation assistant 224 in response to a story board request.
  • the story board request comprises at least the recognized words provided by the speech recognition module 220 to the computerized animation assistant 224 .
  • the story board signal provided relates to at least one part of the story board related to the recognized words provided.
  • step 240 animation information is provided to a user interface using the computerized animation assistant 224 .
  • the animation information provided to the user interface is intended to enable an easier and quicker creation of animation.
  • the animation information comprises a sequence of recognized words with related part of the storyboard; the animation information further comprises related visem for each of the recognized words in a frame in order to facilitate the work of the draftsman. It will be appreciated by someone skilled in the art that such a tool enables the draftsman to precisely locate where an animation drawing must be made and further provides the draftsman with a clear indication of how a drawing should be made according to the visems provided.
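  • The animation information can thus be pictured as a per-frame viseme track derived from the recognized phonemes and their time codes. The sketch below assumes a frame rate of 24 fps and uses an invented phoneme-to-viseme table standing in for the visem database 222.

```python
# Compact sketch: derive, for each recognized phoneme, the viseme to draw and the
# frame at which it starts. The frame rate and the phoneme-to-viseme table are
# assumptions used only for illustration.
FRAME_RATE = 24.0

PHONEME_TO_VISEME = {
    "AH": "open",
    "M": "closed",
    "OW": "rounded",
    "F": "lip-teeth",
}


def viseme_track(timed_phonemes):
    """timed_phonemes: list of (phoneme, start_seconds, end_seconds).
    Returns a list of (frame_number, viseme) tuples."""
    track = []
    for phoneme, start, _end in timed_phonemes:
        viseme = PHONEME_TO_VISEME.get(phoneme, "neutral")
        track.append((int(round(start * FRAME_RATE)), viseme))
    return track
```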
  • the computerized animation assistant provides an adjusted voice track signal to the adjusted voice track database 230 .
  • FIG. 14 there is shown another embodiment of the invention.
  • an adaptation of an audiovisual work is performed.
  • a voice source 282 provides a voice source signal to a speech recognition module 284 .
  • An adapted voice source 280 provides an adapted voice source signal to the speech recognition module 284 .
  • the adapted voice source 280 is preferably provided by a user adapting the voice source.
  • the speech recognition module 284 performs a speech recognition of the voice source signal and of the adapted voice source signal.
  • the speech recognition module 284 provides recognized voice source signal with voice source related time codes to a recognized data analysis unit 286 .
  • the speech recognition module 284 further provides recognized adapted voice source signal with adapted voice source related time codes to the recognized data analysis unit 286 .
  • the recognized data analysis unit 286 performs an analysis of the received recognized adapted voice source signal with the adapted voice source related time codes and the recognized voice source signal with the voice source related time codes.
  • the recognized data analysis unit 286 provides an analysis result signal to a recognized data matching unit 288 .
  • the analysis result provides an indication on whether it is possible to match the adapted voice source signal and the voice source signal using the voice source related time codes and the adapted voice source related time codes.
  • the recognized data analysis unit 286 operates by trying to match the adapted voice source signal and the voice source signal using phonemes.
  • the match is performed using visems.
  • the recognized data matching unit 288 receives a user defined criteria signal.
  • the user defined criteria signal provides an indication of a level of adaptation synchronization quality required by a user.
  • the recognized data matching unit 288 provides an adapted voice source signal.
  • FIG. 15 there is shown a flow chart which shows how an adaptation is performed.
  • a user sets his or her preferences.
  • the setting of the preferences comprises a providing of a user defined criteria signal to the recognized data matching unit 288 .
  • the user defined criteria signal is indicative of a level of adaptation synchronization required by the user.
  • the voice source signal originating from the voice source 282 , is provided to the speech recognition module 284 .
  • step 254 a speech recognition of the voice source signal is performed by the speech recognition module 284 .
  • an adapted voice source is provided by an operator.
  • the adapted voice source is created by adapting the voice source signal provided by the voice source 282 .
  • the speech recognition module outputs the recognized voice source signal and the related recognized voice source time codes.
  • a speech recognition of the adapted voice source signal is performed by the speech recognition module 284 .
  • the speech recognition module outputs the recognized adapted voice source signal and the related recognized adapted voice source time codes.
  • step 260 an attempt is made to match the recognized adapted voice source signal and the recognized voice source signal.
  • the attempt is made by using the related recognized voice source time codes and the related recognized adapted voice source time codes.
  • the user defined criteria signal is also used to assist adaptation synchronization.
  • an indication of confidence is provided to the user.
  • the indication of confidence provides an indication of an amount of time codes matched between the related recognized adapted voice source time codes and the related recognized voice source time codes.
  • step 264 a recording of the result of the adaptation is performed.
  • a user may provide a minimum amount of time codes to be matched.
  • the recording of the result of the adaptation may be cancelled if the minimum amount of time codes to be matched is not met.
  • the invention may be advantageously used for assisting adaptation by attempting to match time codes.
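  • The indication of confidence can be read as the share of time codes of the original voice source that find a counterpart in the adapted voice source, compared against the user's minimum. The sketch below assumes that two time codes "match" when they fall within a small tolerance; the tolerance value and function names are assumptions.

```python
# Sketch of the confidence indication: the fraction of original time codes that find
# a counterpart in the adapted take within a tolerance, compared to a user minimum.
def match_confidence(original_time_codes, adapted_time_codes, tolerance=0.05):
    if not original_time_codes:
        return 0.0
    matched = sum(
        1 for t in original_time_codes
        if any(abs(t - a) <= tolerance for a in adapted_time_codes)
    )
    return matched / len(original_time_codes)


def may_record(original_time_codes, adapted_time_codes, minimum_ratio):
    """Recording of the adaptation may be cancelled when the minimum is not met."""
    return match_confidence(original_time_codes, adapted_time_codes) >= minimum_ratio
```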
  • Karaoke is a form of entertainment that originated in Japan twenty years ago and whose name means “empty orchestra”. It is an abbreviation of Karappo Okesutura: Kara translates to empty and Oke translates to orchestra.
  • FIG. 16 there is shown an embodiment of the invention for assisting Karaoke generation.
  • An audio/video source 300 provides an audio signal to a speech recognition module 310 .
  • the speech recognition module 310 performs a speech recognition of the audio signal provided and generates recognized voice source signal with related time codes.
  • the recognized voice source signal with related time codes are then received by the recognized voice source formatting unit 312 .
  • the recognized voice source formatting unit 312 also receives a video source signal from the audio/video source 300 and a music source signal.
  • the recognized voice source formatting unit 312 generates a combined video signal comprising formatted text, the music signal and at least one part of the video source signal provided and provides the combined video signal to a display 314 .
  • the recognized voice source formatting unit 312 provides a formatted text synchronized with the video source signal and with the music using the recognized voice source signal and the related time codes.
  • a marker is used to locate exactly a current temporal location on the formatted text with respect to music played.
  • the speech recognition module 310 provides the music without lyrics and respective time codes.
  • the formatted text is then displayed on the display 314 , in a precise manner, using a time-dependent character set and a horizontal font, or it can be combined with computer-generated animation.
  • computer generated animation may be used to enhance the display, in order to have an entertaining display as well.
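  • Locating the marker on the formatted text reduces to finding which recognized word spans the current playback time; the following is a minimal sketch of that lookup.

```python
# Minimal sketch of the moving marker: given words with begin/end time codes and the
# current playback time, return the index of the word to highlight (or None).
def current_word_index(timed_words, playback_time):
    """timed_words: list of (word, start, end) tuples in temporal order."""
    for index, (_word, start, end) in enumerate(timed_words):
        if start <= playback_time < end:
            return index
    return None
```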
  • FIG. 17 there is shown how assisting Karaoke is performed.
  • step 320 a user sets his preferences.
  • an audio/video source 300 is provided.
  • the audio/video source 300 may be provided using a plurality of media.
  • the audio/video source 300 comprises an audio source signal and a video source signal.
  • a speech recognition of the audio source signal is performed by the speech recognition module 310 .
  • step 326 recognized speech and time codes, originating from the speech recognition module 310 , are used to generate the combined video signal.
  • the combined video signal comprises formatted text, the music signal and at least one part of the video source signal provided.
  • step 328 formatted text and at least one part of the video signal is displayed on the display 314 .
  • Music without the lyrics is also provided.
  • a composer/conductor may wish to insert notes or any indications that may be required to further create or amend music.
  • the insertion of notes or any indications is performed according to a specific insertion scheme, and the notes and indications are further detected in an audiovisual work.
  • the notes and the indications are provided together with related time codes to a display.
  • the notes and the indications, provided together with related time codes, are then used to further amend music in the audiovisual work.

Abstract

A method and apparatus are disclosed for producing an audiovisual work. The method and apparatus are based on speech recognition. Extraction of basic units of speech with related time codes is performed. The invention may be advantageously used for performing post-production synchronization of a video source and for assisting dubbing, closed-captioning and animation generation.

Description

  • The present application is a continuation-in-part of U.S. application Ser. No. 10/067,131 filed on Sep. 12, 2001 designating the United States of America now pending, the specification of which is hereby incorporated by reference. The present application is also a continuation of PCT/CA02/01386 filed on Sep. 12, 2002, designating the United States, now pending and the specification of which is hereby incorporated by reference.[0001]
  • FIELD OF THE INVENTION
  • This invention pertains to the field of what is commonly referred to as speech recognition. More precisely, this invention provides a method and an apparatus for performing an audiovisual work using synchronized recognition data. [0002]
  • BACKGROUND OF THE INVENTION
  • The system for post-synchronization that is used throughout most of the world is based on what is called a “beep-and-wipe” system. In a recording studio, the actor is given earphones, through which the dialog is fed. [0003]
  • An audible beep is sent as a signal to signify the beginning of the line to be re-recorded. A visual indicator, called a wipe, is superimposed on the screen as a visual indication of when to begin and stop. A series of takes is recorded, sometimes as many as 24, and given to the editor, who verifies them by eye or by trying to match the sound waves of the original production take with those of the newly recorded ones. Most of the editing is, in the end, totally dependent on the experienced eye and ear of the human operators. The method used for film dubbing in the greater part of the world is the same, except in the United States, where the voice of the translator is fed into one of the earphones while the other carries the mixed track of dialog from the original language. The norm for the recording of dialog using this method is between ten and twelve lines of text per hour of studio time. [0004]
  • The system used in France, Quebec, and South Africa consists of taking the film that is to be post-synchronized (or dubbed) and transferring it to either a three quarter inch or a half inch video tape. The video is fed from a VCR to a special machine, called a detection machine, that links a roll of white 35 mm leader film with the VCR so that they run synchronously with each other. A detection of the scene cuts, of all the lip movements and of the dialog of the original language is then performed. A highly skilled craftsperson, called a detector, then proceeds to write with a pencil on the strip of white leader. The detector copies the original language of the film dialog, following the precise movements of the lips, and matches them to the spoken word. During this process, a particular emphasis is laid on a precise matching of the labials and semi-labials. A calligrapher then runs a strip of clear 35 mm leader on top, which is matched sprocket to sprocket with the original white strip underneath. The two rolls are then run simultaneously on a small-geared table. After the rolls are locked, the calligrapher proceeds to copy the detection on the clear leader using a special pen and India ink. When this is completed, the calligraphied dialog is typed by a typist into a computer and copies of the text are printed for the director, the recording engineer, and the actors. The problem inherent in this system is that it is inefficient in its consumption of time and “man hours”. Approximately 150 “man hours” are needed to complete all the operations for a “feature length film” (i.e. a film ranging from 90 to 100 minutes in running time). Since these operations are dependent upon a number of hands, they are open to errors and inaccuracies in the detection process and the calligraphy. After the recording sessions are completed, an editor works on the dialog tracks, adjusting the synchronization. When that is completed to everyone's satisfaction, a final mix of the tracks is done, and the script is re-conformed and tabled for distribution. [0005]
  • The U.S. Pat. No. 5,732,184 teaches a system for the editing of video and audio sequences, and relates only to a system for editing video clips, or small portions of video, and sound clips based on short sections of sound waves displayed on a video screen. The cursor is able to display no more than three frames of video and sound at the same time in one direction or the other. The cursor then becomes an aid to identifying the material only. [0006]
  • Published GB Patent application GB 2,101,795 relates to dubbing translation of soundtracks on film. This invention depends upon an ability to provide histograms, or a digital representation, of the sound amplitude, which is somewhat difficult for the actors, as it is like asking them to learn a whole new alphabet. The invention also suggests that recorded material can be electronically shaped to fit the lip movement in order to produce more natural speech. Unfortunately, it is known, in light of the current technology, that any reshaping that is not minimal will only distort the sound and will therefore not provide a natural sound. Each section, or loop of film, must be manually operated by a trained user. [0007]
  • French patent publication 2,765,354 discloses a system that allows dubbing into French from other languages. This invention is also used to match the new French dialog to the images. Unfortunately, the system disclosed is slow and time consuming, as it is not automatic and requires manual input. It provides a maximum of 6 usable lines on a timeline. Furthermore, it does not allow any modifications to be made, since the dialog has already been permanently encrusted on the picture. It requires the performers to learn a whole new language of symbols different from the symbols normally used in the standard manual form of operation. [0008]
  • The international publication WO98/101860 provides a fairly simple device that attempts to use a computerized calligraphy of the dialogs. Its primary market is actually the home-entertainment or classroom games market. This device allows the player to substitute their voice for the one on the screen, using a basic recording device. [0009]
  • The “beep-and-wipe” system (in ADR, or Automatic Dialog Replacement) that is currently used throughout the world is a system that is learned by performers, who then must develop proficiency with it. Otherwise, it becomes rather tedious, frustrating, and time consuming. Actors must do it instinctively, i.e. they must learn to anticipate when to begin, taking into account the fact that it takes the human brain 1/20th of a second to decode what the eyes have seen; the time it then takes for the actor to respond to what he or she has just seen would put the synchronization out by approximately 1½ frames. The amount of text that can be said by the actor is limited in terms of time because it is based on the individual actor's retentive powers. The actor who begins his line late realizes it, and tries to catch up by the end of the sentence, making it very difficult to edit. This means that many takes have to be recorded, causing the editor to spend large quantities of time piecing together the final take. The time required by not only the actor but also the director, the studio engineer and the editor, plus the cost of the studio itself, will only create a greater expense of both time and money, an expense that could be avoided. [0010]
  • Spot editing is the editing in the studio by the studio engineer, who tries to match or tailor the waveforms of the original dialog with the newly recorded one. While some spot editing can be done in studio by trying to match waveforms, the drawbacks to this are that it requires some training and knowledge in the ability to read the waveforms so as to be able to properly match them, and also if there is too much variation in the tailoring of the waveforms, it will ultimately cause a distortion in the sound. [0011]
  • The human factor is very important in the current post-synchronization methods used around the world. Operators must be highly trained. Experienced operators are therefore needed, as such methods rely on the capacity of the operators to interact and react with the system; the quality of the post-synchronization performed may consequently vary from time to time. Furthermore, these methods are very time consuming, and therefore very costly. [0012]
  • Accordingly, there is a need for a method and apparatus that will overcome the above-mentioned drawbacks. [0013]
  • SUMMARY OF THE INVENTION
  • It is an object of the invention to provide a method and apparatus for achieving a synchronization of speech recognition data with time. [0014]
  • It is another object of the invention to provide a method and apparatus for achieving post-production synchronization for film and video that will enable an operator to anticipate dialog. [0015]
  • Yet another object of the invention is to provide a method and apparatus for achieving post-production synchronization for film and video without repeatedly moving backward in time. [0016]
  • It is another object of the invention to assist animation production. [0017]
  • It is another object of the invention to assist karaoke production. [0018]
  • Yet another object of the invention is to assist adaptation of an audiovisual work. [0019]
  • Yet another object of the invention is to assist closed-caption generation. [0020]
  • According to an aspect of the invention, there is provided a method for producing an audiovisual work, the method comprising the steps of providing an audio signal to a speech recognition module, performing a speech recognition of said audio signal, the speech recognition comprising an extracting of a plurality of basic units of recognized speech and related time codes, receiving the plurality of basic units of recognized speech and the related time codes from the speech recognition module, processing the received plurality of basic units to provide synchronization information for a production of said audiovisual work, and displaying on a user interface said synchronization information. [0021]
  • According to another aspect of the invention, there is provided a method for performing closed-captioning of an audio source, the method comprising the steps of providing an audio signal of an audio/video signal to a speech recognition module, performing a speech recognition of said audio/video signal, and incorporating text of said recognized speech of the audio signal as closed-captioning into a visual or non-visual portion of the audio/video signal in synchronization. [0022]
  • According to another aspect of the invention, there is provided an apparatus for producing an audiovisual work comprising a speech recognition module receiving an audio signal and providing a plurality of basic units of recognized speech and related time codes, means for processing the plurality of basic units to provide synchronization information for a production of said audiovisual work; and means for displaying on a user interface said synchronization information. [0023]
  • According to another aspect of the invention, there is provided an apparatus for performing closed-captioning of an audio source, the apparatus comprising a speech recognition module receiving an audio signal and providing recognized speech, incorporating means for incorporating text of said recognized speech of the audio signal as closed-captioning into a visual or non-visual portion of the audio/video signal in synchronization. [0024]
  • A “rythmo band” is a clear band of 35 mm film on which text is written in India ink by a calligrapher; it is projected in the recording studio by means of a specifically designed overhead projector and is run locked in synchronization with a projected image. The “rythmo band” comprises the script and the translated script in the case of film dubbing.[0025]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The invention will be better understood by way of the following description of the preferred embodiment, together with the accompanying drawings, in which: [0026]
  • FIG. 1 is a flow chart of the preferred embodiment of the invention; [0027]
  • FIG. 2 is a block diagram of one embodiment of the invention; the apparatus comprises a video source, a conformed text source, a phoneme to grapheme unit, a phoneme recognition module, a post-production sound recording synchguide, a new sound source, a project database; [0028]
  • FIG. 3 is a flow chart of one embodiment of the invention; [0029]
  • FIG. 4 is a flow chart which shows how a project environment is set; [0030]
  • FIG. 5 is a flow chart which shows how the script is prepared; [0031]
  • FIG. 6 is a flow chart which shows how the synchguide is prepared; [0032]
  • FIG. 7 is a flow chart which shows how the synchguide may be changed; [0033]
  • FIG. 8 is a screenshot which shows the user interface in one embodiment of the invention; [0034]
  • FIG. 9 is a block diagram of another embodiment of the invention; in this embodiment, the apparatus comprises a video source, a conformed text source, a phoneme recognition module; a post-production sound recording synchguide, a new sound source, a project database; [0035]
  • FIG. 10 is a block diagram of another embodiment of the invention for assisting closed-caption generation; in this embodiment, the apparatus comprises an audio video source, a speech recognition module and a closed-caption editor; [0036]
  • FIG. 11 is a flow chart which shows how the assisting closed-caption generation is performed; [0037]
  • FIG. 12 is a block diagram of another embodiment of the invention for assisting animation production; the apparatus comprises a speech recognition module, a computerized animation assistant, a visem database, a storyboard database; [0038]
  • FIG. 13 is a flow chart which shows how the assisting animation production is performed; [0039]
  • FIG. 14 is a block diagram which shows another embodiment of the invention for assisting adaptation of an audiovisual work; [0040]
  • FIG. 15 is a flow chart which shows another embodiment of the invention for assisting adaptation of an audiovisual work; [0041]
  • FIG. 16 is a block diagram which shows another embodiment of the invention for assisting Karaoke generation; and [0042]
  • FIG. 17 is a flow chart which shows another embodiment of the invention for assisting Karaoke generation.[0043]
  • DETAILED DESCRIPTION
  • In the present application, the word “synchguide” will be introduced and will relate to an extended version of the concept of a “rythmo band”. [0044]
  • Now referring to FIG. 1, there is shown the preferred embodiment of the invention. [0045]
  • According to [0046] step 2, an audio signal is provided. The audio signal comprises at least speech data. The audio signal may further comprise Foley. The audio signal originates from an audio signal source.
  • According to [0047] step 4, speech recognition is performed using the audio signal provided by the audio signal source. In one embodiment of the invention, the speech recognition provides an element indicative of a speech source.
  • According to [0048] step 6, basic units of recognized speech are extracted with related time codes. In the preferred embodiment, the basic units are phonemes of the recognized speech. In the preferred embodiment of the invention, each phoneme is provided with two related time codes indicative of a beginning of the phoneme and of an end of the phoneme with respect to a common time origin.
  • According to [0049] step 8, the recognized phonemes and the related time codes are aligned with data in a user interface. The user interface comprises at least a time scale to which the recognized phonemes are aligned. The user interface may comprise various other information depending on a targeted use.
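As an illustration of steps 6 and 8, the following is a minimal sketch (not taken from the patent) of how recognized phonemes carrying begin and end time codes could be represented and mapped onto a display time scale; the class name, field names and pixels-per-second scale are illustrative assumptions.

```python
# Illustrative sketch only: phonemes with begin/end time codes relative to a
# common time origin, mapped to horizontal positions on a time scale.
from dataclasses import dataclass

@dataclass
class Phoneme:
    symbol: str   # recognized basic unit, e.g. "AH"
    start: float  # seconds from the common time origin
    end: float    # seconds from the common time origin

def align_to_scale(phonemes, pixels_per_second=100.0):
    """Return (symbol, x_start, x_end) tuples for drawing against a time scale."""
    return [(p.symbol, p.start * pixels_per_second, p.end * pixels_per_second)
            for p in phonemes]

if __name__ == "__main__":
    recognized = [Phoneme("HH", 0.00, 0.08), Phoneme("EH", 0.08, 0.21),
                  Phoneme("L", 0.21, 0.30), Phoneme("OW", 0.30, 0.52)]
    for symbol, x0, x1 in align_to_scale(recognized):
        print(f"{symbol:>3}  {x0:6.1f}px -> {x1:6.1f}px")
```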
  • Post-Production Assistant [0050]
  • Now referring to FIG. 2, there is shown one specific embodiment of the invention. This specific embodiment of the invention is intended to assist post-production operations. [0051]
  • In this specific embodiment, the invention comprises a [0052] video source 10, a display 12, a conformed text source 14, a phoneme to grapheme unit 16, a phoneme recognition module 18, a word to phoneme database 20, a phoneme to grapheme database 21, a post-production sound recording synchguide 22, a video destination 24, a new sound source 26 and a project database 28.
  • The [0053] video source 10 provides a video stream to the display and to the post-production sound recording synchguide 22. The video source 10 also provides an audio signal to the post-production sound recording synchguide 22 and to the phoneme recognition module 18. The video source 10 provides time codes to the conformed text source 14. In this embodiment of the invention, the time codes provided have a common time origin.
  • The conformed [0054] text source 14 is controlled by the post-production sound recording synchguide 22. The conformed text source 14 provides conformed text and time codes to the phoneme recognition module 18 and to the phoneme to grapheme unit 16. The phoneme recognition module 18 is a standard voice recognition module that provides phonemes as well as time codes. Someone skilled in the art of voice recognition will have sufficient information to select an appropriate phoneme recognition module.
  • The phoneme to [0055] grapheme unit 16 receives the phonemes and the time codes from the phoneme recognition module 18. The phoneme recognition module 18 is controlled by the post-production sound recording synchguide 22. In the preferred embodiment of the invention, each phoneme provided by the phoneme recognition module 18 has two time codes. One of the two time codes is dedicated to the beginning of the phoneme; the other of the two time codes is dedicated to the end of the phoneme.
  • The phoneme to [0056] grapheme unit 16 provides at least the graphemes with the time codes to the post-production sound recording synchguide 22. Two time codes are dedicated for each grapheme in the preferred embodiment of the invention. The first time code of the two time codes is dedicated to the beginning of the grapheme, while the second time code of the two time codes is dedicated to the end of the grapheme.
  • The post-production [0057] sound recording synchguide 22 receives the graphemes with the time codes from the phoneme to grapheme unit 16 and a new audio signal from the new sound source 26, and provides the results to the audio destination 24. The post-production sound recording synchguide 22 is connected to the project database 28.
  • The phoneme to [0058] grapheme unit 16 is connected to the word to phoneme database 20 and to the phoneme to grapheme database 21.
  • The word to [0059] phoneme database 20 comprises a list of words with their corresponding phonemes.
  • The phoneme to [0060] grapheme database 21 comprises a list of words where the phonemes of each word are mapped to the corresponding graphemes.
  • The [0061] project database 28 comprises data about the project as explained below.
  • Now referring to FIG. 3, there is shown another embodiment of the invention. According to step [0062] 30 of FIG. 3, the project environment is set. The script is then prepared according to step 32 of FIG. 3. The synchguide is prepared according to step 34 of FIG. 3. Then, according to step 36, the synchguide is modified. According to step 38, information related to the project is generated.
  • Now referring to FIG. 4, there is shown how the project environment is set up. According to step [0063] 40, the global parameters for the project are entered. The global parameters comprise, but are not limited to, the name of the project, the type of project, the identity as well as the access login and password of the persons allowed to work on the project, etc.
  • According to step [0064] 42, the project parameters are entered. The project parameters comprise, but are not limited to, the name and the location of the video source and the name and the location of the audio source. In the preferred embodiment of the invention, the global parameters and the project parameters are provided to the post-production sound recording synchguide 22 and stored in the project database 28.
  • According to step [0065] 44, the project is planned. The step of planning the project comprises the step of assigning a time schedule to the persons allowed to work on the project.
  • Now referring to FIG. 5, there is shown how the script is prepared in the preferred embodiment of the invention. According to step [0066] 48, the script is conformed. According to step 50, the script is formatted. According to step 52, a part of the formatted script is selected. The part of the formatted script is selected from the conformed text source 14 using the control of the post-production sound recording synchguide 22.
  • Now referring to FIG. 6, there is shown how the synchguide is prepared. [0067]
  • According to step [0068] 58, the script is provided to the phoneme recognition module 18. According to step 60, phonemes are generated by the phoneme recognition module 18 using at least the provided script and time codes. According to step 62, graphemes are generated using the phoneme to grapheme unit 16, the word to phoneme database 20 and the phoneme to grapheme database 21. In the preferred embodiment, graphemes are generated with their related time codes. More precisely, the phoneme to grapheme unit 16 receives a word from the conformed text source 14; the phonemes of the word provided by the conformed text source 14 are then found using the word to phoneme database 20. The phoneme to grapheme unit 16 also receives the phonemes as well as time codes from the phoneme recognition module 18. A match is then performed between the phonemes provided by the phoneme recognition module 18 and the phonemes found using the word to phoneme database 20. The phoneme to grapheme unit 16 then provides the graphemes using the phoneme to grapheme database 21, together with the word and the matched phonemes.
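The following is a simplified, hypothetical sketch of the matching just described; the toy word-to-phoneme and phoneme-to-grapheme tables merely stand in for databases 20 and 21, and the function and variable names are illustrative assumptions.

```python
# Illustrative sketch: attach the recognized time codes to graphemes by
# matching recognized phonemes against the dictionary phonemes of a word.
WORD_TO_PHONEMES = {"hello": ["HH", "EH", "L", "OW"]}     # stands in for database 20
PHONEMES_TO_GRAPHEMES = {"hello": ["h", "e", "ll", "o"]}  # stands in for database 21

def graphemes_with_time_codes(word, recognized):
    """recognized: list of (phoneme, start, end) from the recognition module."""
    expected = WORD_TO_PHONEMES[word]
    graphemes = PHONEMES_TO_GRAPHEMES[word]
    result = []
    for (expected_ph, grapheme), (ph, start, end) in zip(zip(expected, graphemes), recognized):
        if expected_ph != ph:
            continue  # unmatched phoneme; a real system would resolve the mismatch
        result.append((grapheme, start, end))
    return result

print(graphemes_with_time_codes(
    "hello", [("HH", 0.00, 0.08), ("EH", 0.08, 0.21), ("L", 0.21, 0.30), ("OW", 0.30, 0.52)]))
```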
  • According to step [0069] 63, the graphemes are placed on the synchguide. In this embodiment of the invention, the graphemes are placed on the synchguide using the post-production sound recording synchguide 22. The synchguide is displayed using the display 12.
  • According to step [0070] 64, a check is performed on the synchguide in order to confirm that the original synchguide is correct. If the synchguide is not correct, for instance if small grammatical errors are detected, the text may be amended according to step 66. If the synchguide is correct, then according to step 68, labials as well as other miscellaneous information are provided.
  • Now referring to FIG. 7, there is shown how the synchguide may be modified. [0071]
  • According to step [0072] 70, the user may provide a new text. The new text is provided to the conformed text source 14. According to step 72, a new sound source may be provided using the new sound source 26. According to step 74, the new sound source is aligned with the new text. This step is performed by generating the phonemes related to the new text source and their related time codes and then performing the phoneme to grapheme conversion using the phoneme to grapheme unit 16 together with the word to phoneme database 20 and the phoneme to grapheme database 21. Using the time codes generated by the phoneme to grapheme unit 16, the new sound source is aligned with the new text.
  • According to step [0073] 76, at least one part of the new synchguide is then aligned with the old synchguide. The alignment is performed in the preferred embodiment of the invention using the time codes.
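As an illustration of the alignment performed in steps 74 and 76, here is a minimal sketch under the assumption that alignment reduces to shifting the new time codes so that a grapheme common to the old and new synchguides coincides; the real alignment may be considerably more elaborate.

```python
# Illustrative sketch: shift the time codes of a new take so that it lines up
# with the existing synchguide on the first grapheme they have in common.
def align_new_to_old(old, new):
    """old, new: lists of (grapheme, start, end); returns a shifted copy of new."""
    anchors = {g: s for g, s, _ in old}
    for grapheme, start, _ in new:
        if grapheme in anchors:
            offset = anchors[grapheme] - start
            break
    else:
        offset = 0.0  # nothing in common: leave the new take where it is
    return [(g, s + offset, e + offset) for g, s, e in new]

old_guide = [("h", 10.00, 10.08), ("e", 10.08, 10.21), ("ll", 10.21, 10.30)]
new_take = [("h", 0.00, 0.07), ("e", 0.07, 0.20), ("ll", 0.20, 0.31)]
print(align_new_to_old(old_guide, new_take))
```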
  • In another embodiment of the invention, the new synchguide is saved in the [0074] project database 28.
  • According to step [0075] 78 of FIG. 7, the new synchguide is provided to the user.
  • In another embodiment of the invention, the persons allowed to work on the project may work from a remote location. The post-production [0076] sound recording synchguide 22 may be connected in this embodiment of the invention to a post-production sound recording synchguide server. Each allowed person may then access the post-production sound recording synchguide server remotely through a Local Area Network (LAN) or through a Wide Area Network (WAN).
  • Now referring to FIG. 8, there is shown a screen shot of the user interface in this embodiment of the invention. [0077]
  • In this embodiment of the invention, the user interface comprises a menu, a [0078] guide track 90, a symbol menu 94, a loop/preview box 96, a zoom window 99, a navigation window 100 and a script window 102.
  • The [0079] guide track 90 enables the user to visualize the universal guide track. The universal guide track comprises a list of all the current actors on the scene as well as all the sound effects that are not performed by an actor.
  • In one embodiment of the invention, the identity of the actors is detected using the [0080] project database 28. For each actor the corresponding dialog is provided. The dialog is synchronized with time and displayed in a manner that allows an easy post-synchronization. In the preferred embodiment, the dialog is synchronized with time using the time codes provided with the graphemes.
  • The graphemes are placed with a letter length that corresponds to the phoneme length in order to provide an easy post-synchronization. This may be achieved using a time-dependent character set or using, for instance, a technology such as the “Truetype” technology. In another embodiment, different colors may be used to show the temporal properties of the graphemes. [0081]
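A minimal sketch of this proportional-width idea follows; the rendering model (a fixed number of pixels per second) is an assumption made for illustration and is not the patent's character-set or “Truetype” mechanism.

```python
# Illustrative sketch: give each grapheme a horizontal extent proportional to
# the duration of its phoneme, so the displayed text reads in time.
def grapheme_widths(graphemes, pixels_per_second=200.0):
    """graphemes: list of (text, start, end); returns (text, width_in_pixels)."""
    return [(text, (end - start) * pixels_per_second) for text, start, end in graphemes]

for text, width in grapheme_widths(
        [("h", 0.00, 0.08), ("e", 0.08, 0.21), ("ll", 0.21, 0.30), ("o", 0.30, 0.52)]):
    print(f"{text:<2} width={width:6.1f}px")
```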
  • Preferably, elements such as breaths, efforts, presence and exclamations are placed on the universal guide track using special expository symbols. In another embodiment, dedicated colors are used to present a special effect. In this embodiment, Foleys are placed on a Foley track. [0082]
  • A moveable synchronizer bar enables the allowed users to view with precision the current position. A user may use the moveable synchronizer bar to move to a precise position. [0083]
  • The [0084] navigation window 100 enables a user to navigate through the different parts of the project. The navigation window 100 comprises a display that allows a user to find out his relative position. The navigation window 100 also comprises a display that allows a user to change the current scene. The navigation window 100 also provides a zoom in/out tool. The navigation window 100 also provides a tool that enables speed control and an indication of the frame reference.
  • The [0085] script window 102 enables a user to have access to the conformed text. The text currently spoken on the screen is highlighted. A user may edit the text in the conformed text window.
  • The [0086] zoom window 99 allows a user to view the lip movements with a greater precision.
  • In a first alternative embodiment, the invention does not comprise the conformed [0087] text source 14. In this embodiment, the phoneme recognition module 18 may not provide the phonemes with as great an efficiency, as the conformed text source is not available to guide the recognition.
  • Now referring to FIG. 9, there is shown another alternative embodiment. In this embodiment, the system comprises a [0088] video source 10, a display 12, a conformed text source 14, a phoneme recognition module 18, a post-production sound recording synchguide 22, an audio destination 24, a new sound source 26 and a project database 28.
  • The [0089] video source 10 provides an audio source to the phoneme recognition module 18 and to the post-production sound recording synchguide 22. The video source 10 further provides time codes to the conformed text source, to the phoneme recognition module 18 and to the post-production sound recording synchguide 22. The video source 10 provides the video source to the display 12 and to the post-production sound recording synchguide 22.
  • The conformed [0090] text source 14 provides the conformed text to the phoneme recognition module 18. In this embodiment, the phoneme recognition module 18 provides the phonemes with the related time codes to the post-production sound recording synchguide 22. The phoneme recognition module 18 and the conformed text source 14 are controlled by the post-production sound recording synchguide 22. The phoneme recognition module 18 is of the same type as the one described in the first embodiment of the invention.
  • In this embodiment, the post-production [0091] sound recording synchguide 22 provides the phonemes with their related time codes on the synchguide, which is displayed by the display. More precisely, the post-production sound recording synchguide 22 provides a user interface where the phonemes are placed together with an indication of the current temporal location. The user therefore has an indication of when a sound begins and when it ends. It will be appreciated that this embodiment is simpler than the first embodiment but still greatly improves on prior art “beep and wipe” systems. An alternative to this embodiment is to not include the conformed text source 14.
  • Closed-Captioning Assistant [0092]
  • Closed-captioning may be divided into two different types. [0093]
  • A first type of closed-captioning, also named “offline-captioning”, is when there is sufficient lead time between completion of a program, series or film and its transmission. In such a case, a file of captions can be prepared. Each caption will be assigned its own unique time code cue which references back to the original master tape. [0094]
  • Someone skilled in the art will appreciate that it takes an experienced steno-captioner about 9 hours to close-caption a 22-minute, half-hour program. It takes the same experienced person, depending on the complexity and degree of difficulty, about five times that, or 45 hours, to do a 100-minute feature film. Someone skilled in the art will appreciate that inserting the time codes alone can take up to 20% of the total time to prepare the file of captions. [0095]
  • A second type of closed-captioning, also named “online-captioning”, is when there is not sufficient lead time between completion of a program, series or film and its transmission. This is the case for news, current affairs programs or live broadcasts. In such cases, programs must be captioned online by a live real time steno-captioner. Unfortunately, someone skilled in the art will appreciate that there is never sufficient time to insert proper time codes, and often the source material does not contain continuous time codes to trigger these cues. Consequently, even if the steno-captioners are able to prepare their caption scripts just prior to transmission, it is still necessary for one of them to manually cue out each caption one by one. It will be appreciated that this is an inefficient use of highly skilled and valuable personnel. [0096]
  • According to the invention, and as explained below, it is possible to automatically cue an online air captioning device and thereby free up the person or persons to prepare the following broadcast material. [0097]
  • Now referring to FIG. 10, there is shown an embodiment of the invention for assisting closed-captioning. [0098]
  • In this embodiment, an [0099] audio video source 200 provides an audio signal to a speech recognition module 202. The speech recognition module 202 provides recognized words and related time codes to a closed-caption editor 204. The closed-caption editor 204 further receives a video signal from the audio video source 200.
  • Now referring to FIG. 11, there is shown how the embodiment described in FIG. 10 operates. [0100]
  • According to step [0101] 206, a closed-captioning operator sets his or her preferences. The preferences comprise at least user preferences related to a user interface. The preferences may comprise additional preferences.
  • According to step [0102] 208, an audio video source signal is provided by the audio video source 200.
  • According to step [0103] 210, speech recognition of the audio video source signal is performed by the speech recognition module 202. The speech recognition module 202 outputs recognized words and related time codes and provides the recognized words and the related time codes to the closed-caption editor 204.
  • According to step [0104] 212, results from the speech recognition are analyzed according to the user preferences provided at step 206. For instance, if the rate of recognized words reaches a predetermined level, an indication to that effect is provided to the closed-captioning operator.
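A minimal sketch of one such analysis follows, under the assumption that the operator preference is a maximum caption rate expressed in words per minute; the actual criteria are whatever preferences were provided at step 206.

```python
# Illustrative sketch: flag caption rates that exceed an assumed operator preference.
def caption_rate_alert(words_with_time_codes, max_words_per_minute=180):
    """words_with_time_codes: list of (word, start, end) in seconds; True means alert."""
    if not words_with_time_codes:
        return False
    span = words_with_time_codes[-1][2] - words_with_time_codes[0][1]
    rate = len(words_with_time_codes) / (span / 60.0) if span > 0 else float("inf")
    return rate > max_words_per_minute

print(caption_rate_alert([("the", 0.0, 0.2), ("quick", 0.2, 0.5), ("fox", 0.5, 0.8)]))
```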
  • According to step [0105] 214, the recognized words and the related time codes are provided to the closed-captioning operator. It will be appreciated by someone skilled in the art that the invention reduces the time required to perform the closed-caption generation by at least automating the matching of time code cues to captions which have been prepared by an operator transcribing the audio material. Furthermore, it will be appreciated that the time required to perform the closed-caption generation can be reduced even further by performing the automatic voice recognition. According to the invention, a 100-minute film could be transcribed and time coded, automatically within 15 to 20 minutes, leaving the closed-captioning operator to verify manually what had previously been accomplished automatically.
  • It will be therefore appreciated that the saving of time generated by this advantageous embodiment will be easily translated into the time banking of valuable personnel. [0106]
  • In the preferred embodiment of the invention, the speech recognition performed according to [0107] step 210 is performed on a married track that contains all the dialogues, the music, effects and background or ambient noises.
  • In this embodiment, the closed-captioning operator is able to quickly visualize the recognized words on the user interface. The closed-captioning operator is then able to amend the recognized words to correct a defect or to insert a symbol related to an event. In the case of a high rate of recognized words, the closed-captioning operator may then simply decide to rewrite new closed-captions which comply with a suitable rate. [0108]
  • As the recognized words are provided with related time codes, the closed-captioning operator is able to control precisely a location of an element to insert or to amend. [0109]
  • It will be appreciated that closed-captioning remains synchronized with time as recognized words are provided with related time code. [0110]
  • The closed-captioning operator may then, in one embodiment, confirm an amendment or a recognized word by pressing a predetermined key. Upon confirmation, related data is then inserted into a Vertical Blanking Interval as known by someone skilled in the art. In an alternative embodiment, an automatic cue is performed. [0111]
  • In a simpler embodiment, no feedback is provided by the closed-captioning operator. The recognized words are directly inserted with related time codes in the Vertical Blanking Interval. [0112]
  • In another embodiment, one word at a time can be confirmed by the closed-captioning operator. In another embodiment, more than one word at a time can be confirmed. [0113]
  • Alternatively, the closed-captioning operator may provide words to the closed-[0114] caption editor 204. The words provided may be provided via steno data that are then translated into words according to the art. In such a case, the closed-captioning operator may also provide abbreviations instead of words to the closed-caption editor 204. The closed-caption editor 204 may in return translate the provided abbreviation into a corresponding word, enabling a saving of time. In such an embodiment, the closed-caption editor 204 further comprises a look-up database and a steno data to word translation unit. The look-up database comprises a relation between an abbreviation and a word.
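The look-up step can be sketched as follows; the abbreviation table and function name are hypothetical stand-ins for the look-up database, not an existing captioning product's interface.

```python
# Illustrative sketch: expand operator-entered abbreviations into full words.
ABBREVIATIONS = {"govt": "government", "intl": "international", "dept": "department"}

def expand(tokens):
    """Replace any known abbreviation with its full word; leave other tokens untouched."""
    return [ABBREVIATIONS.get(token.lower(), token) for token in tokens]

print(" ".join(expand(["The", "intl", "govt", "dept", "responded"])))
```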
  • In an alternative embodiment, the [0115] speech recognition module 202 comprises a word database. The word database may be amended by the user. Such a word database enables a user to introduce new words or to specify the correct orthography of a word, and is therefore of great advantage for closed-captioning.
  • The [0116] speech recognition module 202 may also comprise an orthography module which highlights words that are poorly detected or for which orthography does not seem correct.
  • Animation Assistant [0117]
  • Now referring to FIG. 12, there is shown an embodiment of the invention for assisting animation creation. [0118]
  • In this embodiment, a [0119] voice source 216 provides a voice source signal to a speech recognition module 220. A script source 218 provides a script signal to the speech recognition module 220. The voice source signal provided by the voice source 216 is generated by an actor according to a script read. The voice source signal may be provided to the speech recognition module 220 in accordance with various data formats.
  • The [0120] speech recognition module 220 provides recognized words and time codes to a computerized animation assistant 224. A visem database 222 provides a visem signal to the computerized animation assistant 224. A story board database 228 provides a story board signal to the computerized animation assistant 224.
  • An adjusted voice track signal is provided by the [0121] computerized animation assistant 224 to an adjusted voice track database 230.
  • Now referring to FIG. 13, there is shown how this embodiment operates. [0122]
  • According to step [0123] 232, an animation assistant provides his or her preferences. The preferences comprise information related to a user interface of the computerized animation assistant 224.
  • According to step [0124] 234, a voice source 216 is provided. An actor provides a recording according to a script.
  • In one embodiment the voice source signal comprises a plurality of recordings originating from various actors. [0125]
  • According to step [0126] 235, speech recognition is performed using at least the voice source signal provided by the voice source 216 and using the speech recognition module 220. Recognized words and related time codes are provided to the computerized animation assistant 224. In the preferred embodiment, the phonemes and the related time codes are also provided to the computerized animation assistant 224.
  • According to step [0127] 236, visems are provided to the computerized animation assistant 224 by the visem database 222 in response to a request performed by the computerized animation assistant 224. The request comprises at least the phonemes provided by the speech recognition module 220.
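As an illustration of this viseme request, a minimal sketch follows; the toy phoneme-to-visem table and the function name are assumptions and do not reflect the actual contents of the visem database 222.

```python
# Illustrative sketch: answer a visem request carrying recognized phonemes
# and their time codes with mouth-shape labels for the animator.
VISEM_DATABASE = {  # toy inventory, phoneme -> mouth shape
    "M": "closed", "B": "closed", "P": "closed",
    "AA": "open", "AE": "open", "F": "teeth-on-lip", "V": "teeth-on-lip",
}

def visems_for_phonemes(phonemes_with_time_codes):
    """phonemes_with_time_codes: list of (phoneme, start, end)."""
    return [(VISEM_DATABASE.get(ph, "neutral"), start, end)
            for ph, start, end in phonemes_with_time_codes]

print(visems_for_phonemes([("M", 0.0, 0.1), ("AA", 0.1, 0.3), ("P", 0.3, 0.4)]))
```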
  • According to step [0128] 238, the story board database 228 provides a story board signal to the computerized animation assistant 224 in response to a story board request. The story board request comprises at least the recognized words provided by the speech recognition module 220 to the computerized animation assistant 224.
  • The story board signal provided relates to at least one part of the story board related to the recognized words provided. [0129]
  • According to step [0130] 240, animation information is provided to a user interface using the computerized animation assistant 224. The animation information provided to the user interface is intended to enable an easier and quicker creation of animation.
  • More precisely, the animation information comprises a sequence of recognized words with the related part of the storyboard; the animation information further comprises the related visems for each of the recognized words in a frame in order to facilitate the work of the draftsman. It will be appreciated by someone skilled in the art that such a tool enables the draftsman to precisely locate where an animation drawing must be made and further provides the draftsman with a clear indication of how a drawing should be made according to the visems provided. In this embodiment, the computerized animation assistant provides an adjusted voice track signal to the adjusted [0131] voice track database 230.
  • Adaptation Assistant [0132]
  • Now referring to FIG. 14, there is shown another embodiment of the invention. In this embodiment an adaptation of an audiovisual work is performed. [0133]
  • A [0134] voice source 282 provides a voice source signal to a speech recognition module 284. An adapted voice source 280 provides an adapted voice source signal to the speech recognition module 284. The adapted voice source 280 is preferably provided by a user adapting the voice source.
  • The [0135] speech recognition module 284 performs a speech recognition of the voice source signal and of the adapted voice source signal.
  • The [0136] speech recognition module 284 provides recognized voice source signal with voice source related time codes to a recognized data analysis unit 286. The speech recognition module 284 further provides recognized adapted voice source signal with adapted voice source related time codes to the recognized data analysis unit 286.
  • The recognized [0137] data analysis unit 286 performs an analysis of the received recognized adapted voice source signal with the adapted voice source related time codes and the recognized voice source signal with the voice source related time codes. The recognized data analysis unit 286 provides an analysis result signal to a recognized data matching unit 288.
  • Preferably, the analysis result provides an indication of whether it is possible to match the adapted voice source signal and the voice source signal using the voice source related time codes and the adapted voice source related time codes. [0138]
  • More precisely, the recognized [0139] data analysis unit 286 operates by trying to match the adapted voice source signal and the voice source signal using phonemes. In another embodiment, the match is performed using visems.
  • The recognized [0140] data matching unit 288 receives a user defined criteria signal. The user defined criteria signal provides an indication of a level of adaptation synchronization quality required by a user.
  • The recognized [0141] data matching unit 288 provides an adapted voice source signal.
  • Now referring to FIG. 15, there is shown a flow chart which shows how an adaptation is performed. [0142]
  • According to step [0143] 250, a user sets his or her preferences. The setting of the preferences comprises providing a user defined criteria signal to the recognized data matching unit 288. The user defined criteria signal is indicative of a level of adaptation synchronization required by the user.
  • According to step [0144] 252, the voice source signal, originating from the voice source 282, is provided to the speech recognition module 284.
  • According to step [0145] 254, a speech recognition of the voice source signal is performed by the speech recognition module 284. The speech recognition module outputs the recognized voice source signal and the related recognized voice source time codes.
  • According to step [0146] 256, an adapted voice source is provided by an operator. As explained previously, and preferably, the adapted voice source is created by adapting the voice source signal provided by the voice source 282.
  • According to step [0147] 258, a speech recognition of the adapted voice source signal is performed by the speech recognition module 284. The speech recognition module outputs the recognized adapted voice source signal and the related recognized adapted voice source time codes.
  • According to step [0148] 260, an attempt is made to match the recognized adapted voice source signal and the recognized voice source signal. The attempt is made by using the related recognized voice source time codes and the related recognized adapted voice source time codes. The user defined criteria signal is also used to assist adaptation synchronization.
  • According to step [0149] 262, an indication of confidence is provided to the user. The indication of confidence provides an indication of an amount of time codes matched between the related recognized adapted voice source time codes and the related recognized voice source time codes.
  • According to step [0150] 264, a recording of the result of the adaptation is performed.
  • In one implementation of this embodiment, a user may provide a minimum amount of time codes to be matched. In such implementation, the recording of the result of the adaptation may be cancelled if the minimum amount of time codes to be matched is not met. [0151]
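The matching, the indication of confidence and the cancellation threshold described above can be sketched as follows; the assumption that two basic units match when they are the same phoneme within a small time-code tolerance, and the tolerance and threshold values themselves, are illustrative only.

```python
# Illustrative sketch: estimate how much of the original voice source is
# matched by the adapted voice source, then decide whether to record.
def match_confidence(original, adapted, tolerance=0.25):
    """original, adapted: lists of (phoneme, start, end); returns matched fraction."""
    matched = 0
    for ph, start, _ in original:
        if any(a_ph == ph and abs(a_start - start) <= tolerance
               for a_ph, a_start, _ in adapted):
            matched += 1
    return matched / len(original) if original else 0.0

confidence = match_confidence(
    [("B", 0.0, 0.1), ("AA", 0.1, 0.4), ("T", 0.4, 0.5)],
    [("B", 0.05, 0.15), ("AA", 0.15, 0.45), ("D", 0.45, 0.55)])
minimum_required = 0.8  # assumed user defined criteria
print(f"confidence={confidence:.2f}",
      "record" if confidence >= minimum_required else "cancel recording")
```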
  • It will be appreciated that in this embodiment, the invention may be advantageously used for assisting adaptation by attempting to match time codes. [0152]
  • Karaoke [0153]
  • Karaoke is a form of entertainment that originated in Japan twenty years ago and which means “empty orchestra”. It is an abbreviation of Karappo Okesutura—Kara translates to empty and Oke translates to orchestra. [0154]
  • Methods currently in use today will either underline the words as they come up musically, in different colors sometimes, or they will uncover the lyrics as they pass in time to the music. [0155]
  • Now referring to FIG. 16, there is shown an embodiment of the invention for assisting Karaoke generation. [0156]
  • An audio/[0157] video source 300 provides an audio signal to a speech recognition module 310. The speech recognition module 310 performs a speech recognition of the audio signal provided and generates a recognized voice source signal with related time codes. The recognized voice source signal with related time codes is then received by the recognized voice source formatting unit 312. The recognized voice source formatting unit 312 also receives a video source signal from the audio/video source 300 and a music source signal. The recognized voice source formatting unit 312 generates a combined video signal comprising formatted text, the music signal and at least one part of the video source signal provided, and provides the combined video signal to a display 314. More precisely, the recognized voice source formatting unit 312 provides formatted text synchronized with the video source signal and with the music using the recognized voice source signal and the related time codes. A marker is used to locate exactly the current temporal location on the formatted text with respect to the music being played.
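A minimal sketch of placing this marker follows, under the assumption that the recognized lyrics carry begin and end time codes relative to the music playback position; the bracket rendering of the current word is purely illustrative.

```python
# Illustrative sketch: mark the lyric word that corresponds to the current
# playback time, using the recognized words and their time codes.
def current_word_index(lyrics_with_time_codes, playback_time):
    """lyrics_with_time_codes: list of (word, start, end); returns index or None."""
    for i, (_, start, end) in enumerate(lyrics_with_time_codes):
        if start <= playback_time < end:
            return i
    return None

lyrics = [("row", 0.0, 0.4), ("row", 0.4, 0.8), ("your", 0.8, 1.1), ("boat", 1.1, 1.5)]
i = current_word_index(lyrics, 0.95)
print(" ".join(f"[{w}]" if idx == i else w for idx, (w, _, _) in enumerate(lyrics)))
```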
  • In an alternative embodiment of the invention, the [0158] speech recognition module 310 provides the music without lyrics and respective time codes.
  • Preferably, the formatted text is then displayed on the [0159] display 314, in a precise manner, using a time-dependent character set and a horizontal font, or it can be combined with computer-generated animation. Alternatively, computer-generated animation may be used to enhance the display, in order to have an entertaining display as well.
  • Now referring to FIG. 17, there is shown how assisting Karaoke is performed. [0160]
  • According to step [0161] 320, a user sets his preferences.
  • According to step [0162] 322, an audio/video source 300 is provided. The audio/video source 300 may be provided using a plurality of media. The audio/video source 300 comprises an audio source signal and a video source signal.
  • According to step [0163] 324, a speech recognition of the audio source signal is performed by the speech recognition module 310.
  • According to step [0164] 326, recognized speech and time codes, originating from the speech recognition module 310, are used to generate the combined video signal. As explained above, the combined video signal comprises formatted text, the music signal and at least one part of the video source signal provided.
  • According to step [0165] 328, the formatted text and at least one part of the video signal are displayed on the display 314. Music without the lyrics is also provided.
  • Someone skilled in the art will therefore appreciate that this embodiment is of great advantage as it provides formatted text synchronized with the music. [0166]
  • Musical Guide Track [0167]
  • As someone skilled in the art will appreciate, it is of great advantage to be able to know the exact location of music in an audiovisual work. [0168]
  • Thus, in one embodiment of the invention, a composer/conductor may wish to insert notes or any indications that may be required to further create or amend music. The insertion of the notes or indications is performed according to a specific insertion scheme, and the notes and indications are further detected in an audiovisual work. [0169]
  • Upon detection according to the insertion scheme, the notes and the indications are provided together with related time codes to a display. The notes and the indications, provided together with related time codes, are then used to further amend music in the audiovisual work. [0170]
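Purely as an illustration, one possible insertion scheme is a spoken keyword on a scratch track; the sketch below scans the recognized words for that keyword and surfaces the annotation that follows it together with its time code. The keyword and the format are assumptions, not the scheme defined by the patent.

```python
# Illustrative sketch: detect composer/conductor annotations inserted as
# spoken "note ..." phrases and report them with their time codes.
def extract_notes(recognized_words, keyword="note"):
    """recognized_words: list of (word, start, end); returns list of (time_code, text)."""
    notes, current, start_time = [], [], None
    for word, start, _ in recognized_words:
        if word.lower() == keyword:
            if current:
                notes.append((start_time, " ".join(current)))
            current, start_time = [], start
        elif start_time is not None:
            current.append(word)
    if current:
        notes.append((start_time, " ".join(current)))
    return notes

print(extract_notes([("note", 12.0, 12.3), ("strings", 12.3, 12.8),
                     ("enter", 12.8, 13.1), ("here", 13.1, 13.4)]))
```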
  • It will therefore be appreciated that such an embodiment allows a total focusing on what is on the screen and enables the composer/conductor to incorporate more musical passages and visual images in the session so as to provide a greater fluidity of the music being recorded. This can be further appreciated by a reduction in studio recording time and music editing time. [0171]

Claims (33)

1. A method for producing an audiovisual work, the method comprising the steps of:
providing an audio signal to a speech recognition module;
performing a speech recognition of said audio signal, the speech recognition comprising an extracting of a plurality of basic units of recognized speech and related time codes;
receiving the plurality of basic units of recognized speech and the related time codes from the speech recognition module;
processing the received plurality of basic units to provide synchronization information for a production of said audiovisual work; and
displaying on a user interface said synchronization information.
2. The method as claimed in claim 1, wherein the production comprises post-production audio synchronization, said synchronization information comprises a graphic representation of a sound to be performed at each point in time over a span of time during said audiovisual work, and said interface controls said graphic representation over said span while facilitating synchronized recording of said sound in order to perform post-production.
3. The method as claimed in claim 2, wherein the basic units of recognized speech are phonemes.
4. The method as claimed in claim 2, further comprising the step of converting the basic units of recognized speech received with the time codes into words and words related time codes.
5. The method as claimed in claim 2, further comprising the step of converting the basic units of recognized speech received with the time codes into graphemes and graphemes related time codes, the graphemes being processed to provide synchronization information.
6. The method as claimed in claim 5, further comprising the step of providing a conformed text source, further wherein the synchronization information provided to the user comprises an indication of a temporal location with respect to the audio signal.
7. The method as claimed in claim 5, further comprising the step of providing a script of at least one part of the audio signal, further wherein the synchronization information provided to the user comprises an indication of a temporal location with respect to the script provided.
8. The method as claimed in claim 5, wherein the displaying on a user interface of said synchronization information, comprises the displaying of the graphemes using a horizontally sizeable font.
9. The method as claimed in claim 5, further comprising the step of detecting a Foley in the audio signal using a Foley detection unit, the detecting comprising the providing of an indication of the Foley and a related Foley time code.
10. The method as claimed in claim 5, further comprising the step of amending at least one part of the audio signal and audio signal related time codes using at least the graphemes and the synchronization information.
11. The method as claimed in claim 4, further comprising the providing of a plurality of words in accordance with the provided audio signal, the providing being performed by an operator.
12. The method as claimed in claim 11, further comprising the step of amending a recognized word in accordance with the plurality of words provided by the operator.
13. The method as claimed in claim 12, further comprising the step of creating a composite signal comprising at least the amended word, a video signal related to the audio source and the audio source.
14. The method as claimed in claim 1, wherein the displaying on a user interface of said synchronization information is used to produce animation.
15. The method as claimed in claim 14, wherein for blocks of continuous spoken word, said synchronization information provides essential visem information for each sequential frame to be drawn by an animator.
16. The method as claimed in claim 15, further comprising the step of providing a storyboard database, further comprising the step of converting the basic units of recognized speech received with the time codes into words and words related time codes, the processing of the plurality of words and the words related time codes providing an indication of a current temporal location of the audio signal with respect to the storyboard.
17. The method as claimed in claim 16, wherein the basic units of recognized speech are phonemes, further comprising the step of providing a plurality of visems for each of the plurality of words, using a visem database and using the phonemes.
18. The method as claimed in claim 17, further comprising the step of outputting an adjusted voice track comprising the audio signal, at least one part of the storyboard and the plurality of visems.
19. The method as claimed in claim 1, wherein the production comprises adaptation assisting, the adaptation assisting comprises a graphic representation of the plurality of basic units of recognized speech, the related time codes and a plurality of adapted basic units provided by a user, and said interface providing a visual indication of a matching of the plurality of adapted basic units with the plurality of basic speech units, the matching enabling synchronized adaptation of said audio signal.
20. The method as claimed in claim 19, wherein the plurality of adapted basic units is provided by performing a speech recognition of an adapted voice source.
21. The method as claimed in claim 20, wherein the speech recognition of the adapted voice source further provides related adapted time codes, further wherein the step of adapting the audio signal using said synchronization information and the plurality of adapted basic units is performed by attempting to match at least one of the plurality of basic units with at least one of the plurality of adapted basic units using the related time codes and the related adapted time codes.
22. A method for performing closed-captioning of an audio source, the method comprising the steps of:
providing an audio signal of an audio/video signal to a speech recognition module;
performing a speech recognition of said audio/video signal, and
incorporating text of said recognized speech of the audio signal as closed-captioning into a visual or non-visual portion of the audio/video signal in synchronization.
23. The method as claimed in claim 21 further comprising the step of providing an indication of an amount of successful replacement of the plurality of basic units of recognized speech of the audio signal by the plurality of basic units of recognized speech of the adapted audio signal.
24. The method as claimed in claim 23, further comprising the step of providing a minimum amount required of successful replacement of the plurality of basic units of recognized speech of the audio signal by the plurality of basic units of recognized speech of the adapted audio signal, the method further comprising the step of canceling the providing of the at least one replaced plurality of basic units with related replaced time codes if the at least one replaced plurality of basic units is lower than the minimum amount required of successful replacement.
25. The method as claimed in claim 1, wherein the audio signal comprises a plurality of voices originating from a plurality of actors, further comprising the step of assigning each of the plurality of basic units and the related time codes to a related actor of the plurality of actors.
26. The method as claimed in claim 1, wherein the production comprises closed-captioning production of the audio source, said closed-captioning comprises a graphic representation of the recognized plurality of basic units, the method further comprising the incorporating of at least one of the plurality of basic units as closed-captioning in a visual or non-visual portion of the audio/video portion of the audio/video signal in synchronization.
27. The method as claimed in claim 26, further comprising the step of amending at least one part of the plurality of basic units.
28. The method as claimed in claim 1, further comprising the step of converting the basic units of recognized speech received with the time codes into words and words related time codes, further comprising the step of creating a database comprising a word and related basic units.
29. The method as claimed in claim 28, further comprising the step of amending a word of said database, wherein phonemes of the word and the amended word are substantially the same.
30. The method as claimed in claim 1, further comprising the step of converting the basic units of recognized speech received with the time codes into words and words related time codes, further comprising the step of amending at least one word.
31. The method as claimed in claim 30, further comprises the step of providing a visual indication of a word to amend.
32. The method as claimed in claim 1, wherein the audio signal comprises lyrics that are sung, further wherein the production of said audiovisual work comprises a karaoke generation using said audio signal, said karaoke generation comprises a graphic representation of lyrics to be sung at each point in time over a span of time during said audiovisual work using the plurality of basic units of recognized speech provided and related time codes, together with an index representation of a current temporal position with respect to the graphic representation of the lyrics to be sung.
33. The method as claimed in claim 2, further comprising the step of detecting at least one note encoded in the audio signal according to an encoding scheme, further comprising the providing of the detected at least one note on said graphic representation.
US10/797,576 2001-09-12 2004-03-11 Method and apparatus for performing an audiovisual work using synchronized speech recognition data Abandoned US20040234250A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/797,576 US20040234250A1 (en) 2001-09-12 2004-03-11 Method and apparatus for performing an audiovisual work using synchronized speech recognition data

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US10/067,131 US7343082B2 (en) 2001-09-12 2001-09-12 Universal guide track
PCT/CA2002/001386 WO2003023765A1 (en) 2001-09-12 2002-09-12 Method and device for processing audiovisual data using speech recognition
US10/797,576 US20040234250A1 (en) 2001-09-12 2004-03-11 Method and apparatus for performing an audiovisual work using synchronized speech recognition data

Related Parent Applications (2)

Application Number Title Priority Date Filing Date
US10/067,131 Continuation-In-Part US7343082B2 (en) 2001-09-12 2001-09-12 Universal guide track
PCT/CA2002/001386 Continuation WO2003023765A1 (en) 2001-09-12 2002-09-12 Method and device for processing audiovisual data using speech recognition

Publications (1)

Publication Number Publication Date
US20040234250A1 true US20040234250A1 (en) 2004-11-25

Family

ID=22073905

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/067,131 Expired - Fee Related US7343082B2 (en) 2001-09-12 2001-09-12 Universal guide track
US10/797,576 Abandoned US20040234250A1 (en) 2001-09-12 2004-03-11 Method and apparatus for performing an audiovisual work using synchronized speech recognition data

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US10/067,131 Expired - Fee Related US7343082B2 (en) 2001-09-12 2001-09-12 Universal guide track

Country Status (6)

Country Link
US (2) US7343082B2 (en)
EP (1) EP1425736B1 (en)
AT (1) ATE368277T1 (en)
CA (1) CA2538981C (en)
DE (1) DE60221408D1 (en)
WO (1) WO2003023765A1 (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040117191A1 (en) * 2002-09-12 2004-06-17 Nambi Seshadri Correlating video images of lip movements with audio signals to improve speech recognition
US20060053455A1 (en) * 2002-12-04 2006-03-09 Murali Mani Synchronization of signals
GB2424534A (en) * 2005-03-24 2006-09-27 Zootech Ltd Authoring an audiovisual production with subtitles or other captions
US20070011012A1 (en) * 2005-07-11 2007-01-11 Steve Yurick Method, system, and apparatus for facilitating captioning of multi-media content
US20080065382A1 (en) * 2006-02-10 2008-03-13 Harman Becker Automotive Systems Gmbh Speech-driven selection of an audio file
US20080256136A1 (en) * 2007-04-14 2008-10-16 Jerremy Holland Techniques and tools for managing attributes of media content
US20080255687A1 (en) * 2007-04-14 2008-10-16 Aaron Eppolito Multi-Take Compositing of Digital Media Assets
US20080263450A1 (en) * 2007-04-14 2008-10-23 James Jacob Hodges System and method to conform separately edited sequences
US20080295040A1 (en) * 2007-05-24 2008-11-27 Microsoft Corporation Closed captions for real time communication
US20080300883A1 (en) * 2007-05-30 2008-12-04 Delta Electronics, Inc. Projection Apparatus with Speech Indication and Control Method Thereof
US20100069105A1 (en) * 2005-09-28 2010-03-18 Sprint Spectrum L.P. Automatic rotation through play out of audio-clips in repsonse to detected alert events
US7747290B1 (en) 2007-01-22 2010-06-29 Sprint Spectrum L.P. Method and system for demarcating a portion of a media file as a ringtone
US8060591B1 (en) 2005-09-01 2011-11-15 Sprint Spectrum L.P. Automatic delivery of alerts including static and dynamic portions
US20130150990A1 (en) * 2011-12-12 2013-06-13 Inkling Systems, Inc. Media outline
US8713191B1 (en) 2006-11-20 2014-04-29 Sprint Spectrum L.P. Method and apparatus for establishing a media clip
US9596386B2 (en) 2012-07-24 2017-03-14 Oladas, Inc. Media synchronization
US20190096407A1 (en) * 2017-09-28 2019-03-28 The Royal National Theatre Caption delivery system
US11545134B1 (en) * 2019-12-10 2023-01-03 Amazon Technologies, Inc. Multilingual speech translation with adaptive speech synthesis and adaptive physiognomy

Families Citing this family (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8401336B2 (en) 2001-05-04 2013-03-19 Legend3D, Inc. System and method for rapid image sequence depth enhancement with augmented computer-generated elements
US8897596B1 (en) 2001-05-04 2014-11-25 Legend3D, Inc. System and method for rapid image sequence depth enhancement with translucent elements
US9286941B2 (en) 2001-05-04 2016-03-15 Legend3D, Inc. Image sequence enhancement and motion picture project management system
US7343082B2 (en) * 2001-09-12 2008-03-11 Ryshco Media Inc. Universal guide track
US8009966B2 (en) * 2002-11-01 2011-08-30 Synchro Arts Limited Methods and apparatus for use in sound replacement with automatic synchronization to images
US7142250B1 (en) * 2003-04-05 2006-11-28 Apple Computer, Inc. Method and apparatus for synchronizing audio and video streams
WO2004093059A1 (en) * 2003-04-18 2004-10-28 Unisay Sdn. Bhd. Phoneme extraction system
WO2004100128A1 (en) * 2003-04-18 2004-11-18 Unisay Sdn. Bhd. System for generating a timed phomeme and visem list
JP3945778B2 (en) * 2004-03-12 2007-07-18 インターナショナル・ビジネス・マシーンズ・コーポレーション Setting device, program, recording medium, and setting method
US8179475B2 (en) * 2007-03-09 2012-05-15 Legend3D, Inc. Apparatus and method for synchronizing a secondary audio track to the audio track of a video source
US9390169B2 (en) * 2008-06-28 2016-07-12 Apple Inc. Annotation of movies
US8265450B2 (en) * 2009-01-16 2012-09-11 Apple Inc. Capturing and inserting closed captioning data in digital video
FR2955183B3 (en) * 2010-01-11 2012-01-13 Didier Calle METHOD FOR AUTOMATICALLY PROCESSING DIGITAL DATA FOR DOUBLING OR POST SYNCHRONIZATION OF VIDEOS
US8572488B2 (en) * 2010-03-29 2013-10-29 Avid Technology, Inc. Spot dialog editor
US8744239B2 (en) 2010-08-06 2014-06-03 Apple Inc. Teleprompter tool for voice-over tool
US8730232B2 (en) 2011-02-01 2014-05-20 Legend3D, Inc. Director-style based 2D to 3D movie conversion system and method
US8621355B2 (en) 2011-02-02 2013-12-31 Apple Inc. Automatic synchronization of media clips
US9407904B2 (en) 2013-05-01 2016-08-02 Legend3D, Inc. Method for creating 3D virtual reality from 2D images
US9241147B2 (en) 2013-05-01 2016-01-19 Legend3D, Inc. External depth map transformation method for conversion of two-dimensional images to stereoscopic images
US9288476B2 (en) 2011-02-17 2016-03-15 Legend3D, Inc. System and method for real-time depth modification of stereo images of a virtual reality environment
US9282321B2 (en) 2011-02-17 2016-03-08 Legend3D, Inc. 3D model multi-reviewer system
US9007365B2 (en) 2012-11-27 2015-04-14 Legend3D, Inc. Line depth augmentation system and method for conversion of 2D images to 3D images
US9547937B2 (en) 2012-11-30 2017-01-17 Legend3D, Inc. Three-dimensional annotation system and method
US9007404B2 (en) 2013-03-15 2015-04-14 Legend3D, Inc. Tilt-based look around effect image enhancement method
US9438878B2 (en) 2013-05-01 2016-09-06 Legend3D, Inc. Method of converting 2D video to 3D video using 3D object models
US8719032B1 (en) 2013-12-11 2014-05-06 Jefferson Audio Video Systems, Inc. Methods for presenting speech blocks from a plurality of audio input data streams to a user in an interface
US20160042766A1 (en) * 2014-08-06 2016-02-11 Echostar Technologies L.L.C. Custom video content
GB2553960A (en) 2015-03-13 2018-03-21 Trint Ltd Media generating and editing system
US9609307B1 (en) 2015-09-17 2017-03-28 Legend3D, Inc. Method of converting 2D video to 3D video using machine learning
US10387543B2 (en) * 2015-10-15 2019-08-20 Vkidz, Inc. Phoneme-to-grapheme mapping systems and methods
CN112653916B (en) * 2019-10-10 2023-08-29 腾讯科技(深圳)有限公司 Method and equipment for synchronously optimizing audio and video

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6174170B1 (en) * 1997-10-21 2001-01-16 Sony Corporation Display of text symbols associated with audio data reproducible from a recording disc
US6185538B1 (en) * 1997-09-12 2001-02-06 Us Philips Corporation System for editing digital video and audio information
US20010025241A1 (en) * 2000-03-06 2001-09-27 Lange Jeffrey K. Method and system for providing automated captioning for AV signals
US20010044719A1 (en) * 1999-07-02 2001-11-22 Mitsubishi Electric Research Laboratories, Inc. Method and system for recognizing, indexing, and searching acoustic signals
US6665643B1 (en) * 1998-10-07 2003-12-16 Telecom Italia Lab S.P.A. Method of and apparatus for animation, driven by an audio signal, of a synthesized model of a human face
US7085842B2 (en) * 2001-02-12 2006-08-01 Open Text Corporation Line navigation conferencing system
US7343082B2 (en) * 2001-09-12 2008-03-11 Ryshco Media Inc. Universal guide track

Family Cites Families (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0056507B1 (en) 1981-01-19 1985-06-12 Richard Welcher Bloomstein Apparatus and method for creating visual images of lip movements
GB2101795B (en) 1981-07-07 1985-09-25 Cross John Lyndon Dubbing translations of sound tracks on films
CA1270063A (en) 1985-05-14 1990-06-05 Kouji Miyao Translating apparatus
US5159668A (en) 1989-05-08 1992-10-27 Apple Computer, Inc. Method and apparatus for manipulating outlines in improving digital typeface on raster output devices
US5155805A (en) 1989-05-08 1992-10-13 Apple Computer, Inc. Method and apparatus for moving control points in displaying digital typeface on raster output devices
EP0526064B1 (en) 1991-08-02 1997-09-10 The Grass Valley Group, Inc. Video editing system operator interface for visualization and interactive control of video material
US5434678A (en) 1993-01-11 1995-07-18 Abecassis; Max Seamless transmission of non-sequential video segments
US5481296A (en) 1993-08-06 1996-01-02 International Business Machines Corporation Apparatus and method for selectively viewing video information
JP3356536B2 (en) 1994-04-13 2002-12-16 松下電器産業株式会社 Machine translation equipment
US5717468A (en) 1994-12-02 1998-02-10 International Business Machines Corporation System and method for dynamically recording and displaying comments for a video movie
JP4078677B2 (en) 1995-10-08 2008-04-23 イーサム リサーチ デヴェロップメント カンパニー オブ ザ ヘブライ ユニヴァーシティ オブ エルサレム Method for computerized automatic audiovisual dubbing of movies
JP3454396B2 (en) 1995-10-11 2003-10-06 株式会社日立製作所 Video change point detection control method, playback stop control method based thereon, and video editing system using them
US5732184A (en) 1995-10-20 1998-03-24 Digital Processing Systems, Inc. Video and audio cursor video editing system
US5880788A (en) 1996-03-25 1999-03-09 Interval Research Corporation Automated synchronization of video image sequences to new soundtracks
US6154601A (en) 1996-04-12 2000-11-28 Hitachi Denshi Kabushiki Kaisha Method for editing image information with aid of computer and editing system
US5832171A (en) 1996-06-05 1998-11-03 Juritech, Inc. System for creating video of an event with a synchronized transcript
JPH1074204A (en) 1996-06-28 1998-03-17 Toshiba Corp Machine translation method and text/translation display method
WO1998001860A1 (en) 1996-07-08 1998-01-15 Dubos Regis Audio-visual method and devices for dubbing films
US5969716A (en) 1996-08-06 1999-10-19 Interval Research Corporation Time-based media processing system
AU6313498A (en) 1997-02-26 1998-09-18 Tall Poppy Records Limited Sound synchronizing
US6134378A (en) 1997-04-06 2000-10-17 Sony Corporation Video signal processing device that facilitates editing by producing control information from detected video signal information
FR2765354B1 (en) 1997-06-25 1999-07-30 Gregoire Parcollet FILM DUBBING SYNCHRONIZATION SYSTEM
EP0899737A3 (en) 1997-08-18 1999-08-25 Tektronix, Inc. Script recognition using speech recognition
JPH11162152A (en) 1997-11-26 1999-06-18 Victor Co Of Japan Ltd Lyric display control information editing device
JPH11289512A (en) * 1998-04-03 1999-10-19 Sony Corp Editing list preparing device
US6490563B2 (en) * 1998-08-17 2002-12-03 Microsoft Corporation Proofreading with text to speech feedback

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040117191A1 (en) * 2002-09-12 2004-06-17 Nambi Seshadri Correlating video images of lip movements with audio signals to improve speech recognition
US7587318B2 (en) * 2002-09-12 2009-09-08 Broadcom Corporation Correlating video images of lip movements with audio signals to improve speech recognition
US7552352B2 (en) * 2002-12-04 2009-06-23 Nxp B.V. Synchronization of signals
US20060053455A1 (en) * 2002-12-04 2006-03-09 Murali Mani Synchronization of signals
GB2424534A (en) * 2005-03-24 2006-09-27 Zootech Ltd Authoring an audiovisual production with subtitles or other captions
GB2424534B (en) * 2005-03-24 2007-09-05 Zootech Ltd Authoring audiovisual content
US20080219636A1 (en) * 2005-03-24 2008-09-11 Zootech Limited Authoring Audiovisual Content
US20070011012A1 (en) * 2005-07-11 2007-01-11 Steve Yurick Method, system, and apparatus for facilitating captioning of multi-media content
US8060591B1 (en) 2005-09-01 2011-11-15 Sprint Spectrum L.P. Automatic delivery of alerts including static and dynamic portions
US7904119B2 (en) 2005-09-28 2011-03-08 Sprint Spectrum L.P. Automatic rotation through play out of audio-clips in response to detected alert events
US20100069105A1 (en) * 2005-09-28 2010-03-18 Sprint Spectrum L.P. Automatic rotation through play out of audio-clips in response to detected alert events
US8106285B2 (en) 2006-02-10 2012-01-31 Harman Becker Automotive Systems GmbH Speech-driven selection of an audio file
US7842873B2 (en) * 2006-02-10 2010-11-30 Harman Becker Automotive Systems GmbH Speech-driven selection of an audio file
US20080065382A1 (en) * 2006-02-10 2008-03-13 Harman Becker Automotive Systems GmbH Speech-driven selection of an audio file
US20110035217A1 (en) * 2006-02-10 2011-02-10 Harman International Industries, Incorporated Speech-driven selection of an audio file
US8713191B1 (en) 2006-11-20 2014-04-29 Sprint Spectrum L.P. Method and apparatus for establishing a media clip
US7747290B1 (en) 2007-01-22 2010-06-29 Sprint Spectrum L.P. Method and system for demarcating a portion of a media file as a ringtone
US8751022B2 (en) 2007-04-14 2014-06-10 Apple Inc. Multi-take compositing of digital media assets
US20080263450A1 (en) * 2007-04-14 2008-10-23 James Jacob Hodges System and method to conform separately edited sequences
US20080255687A1 (en) * 2007-04-14 2008-10-16 Aaron Eppolito Multi-Take Compositing of Digital Media Assets
US20080256136A1 (en) * 2007-04-14 2008-10-16 Jerremy Holland Techniques and tools for managing attributes of media content
US20080263433A1 (en) * 2007-04-14 2008-10-23 Aaron Eppolito Multiple version merge for media production
US20080295040A1 (en) * 2007-05-24 2008-11-27 Microsoft Corporation Closed captions for real time communication
US8452593B2 (en) * 2007-05-30 2013-05-28 Delta Electronics, Inc. Projection apparatus with speech indication and control method thereof
US20080300883A1 (en) * 2007-05-30 2008-12-04 Delta Electronics, Inc. Projection Apparatus with Speech Indication and Control Method Thereof
US20130150990A1 (en) * 2011-12-12 2013-06-13 Inkling Systems, Inc. Media outline
US9280905B2 (en) * 2011-12-12 2016-03-08 Inkling Systems, Inc. Media outline
US9596386B2 (en) 2012-07-24 2017-03-14 Oladas, Inc. Media synchronization
US20190096407A1 (en) * 2017-09-28 2019-03-28 The Royal National Theatre Caption delivery system
US10726842B2 (en) * 2017-09-28 2020-07-28 The Royal National Theatre Caption delivery system
US11545134B1 (en) * 2019-12-10 2023-01-03 Amazon Technologies, Inc. Multilingual speech translation with adaptive speech synthesis and adaptive physiognomy

Also Published As

Publication number Publication date
ATE368277T1 (en) 2007-08-15
US7343082B2 (en) 2008-03-11
CA2538981A1 (en) 2003-03-20
EP1425736B1 (en) 2007-07-25
EP1425736A1 (en) 2004-06-09
US20030049015A1 (en) 2003-03-13
WO2003023765A1 (en) 2003-03-20
DE60221408D1 (en) 2007-09-06
CA2538981C (en) 2011-07-26

Similar Documents

Publication Publication Date Title
CA2538981C (en) Method and device for processing audiovisual data using speech recognition
Romero-Fresco Subtitling through speech recognition: Respeaking
Gambier The position of audiovisual translation studies
US6185538B1 (en) System for editing digital video and audio information
CN104246750B (en) Transcription of speech
TWI269245B (en) Video based language learning system
CN110622240B (en) Voice guide generation device, voice guide generation method, and broadcasting system
CN111538851B (en) Method, system, equipment and storage medium for automatically generating demonstration video
US20130047059A1 (en) Transcript editor
US20070011012A1 (en) Method, system, and apparatus for facilitating captioning of multi-media content
JP2003517738A (en) Apparatus and method for using voice recognition and dialogue to capture and play synchronized audio and video
NZ532174A (en) Voice over synchronisation
Spiteri Miggiani Exploring applied strategies for English-language dubbing
GB2101795A (en) Dubbing translating of soundtracks on films
JP4210723B2 (en) Automatic caption program production system
JP2003223199A (en) Preparation support system for writing-up text for superimposed character and semiautomatic superimposed character program production system
JP3816901B2 (en) Stream data editing method, editing system, and program
WO1998038642A1 (en) Sound synchronizing
KR102463283B1 (en) Automatic translation system of video contents for the hearing-impaired and non-disabled
CN113973229B (en) Online editing method for handling verbal slips in video
Żurek Audiovisual translation as a genre: its evolution and perspectives
Babić The dubbing of animated films in Croatia
US11947924B2 (en) Providing translated subtitle for video content
Minutella Audio description software tools
Matamala Audio subtitling

Legal Events

Date Code Title Description
AS Assignment

Owner name: RYSHCO MEDIA INC., CANADA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:COTE, JOCELYNE;RYSHPAN, HOWARD;REEL/FRAME:014895/0958

Effective date: 20040705

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION