CN104199885A - Device and method for acquiring relevant content of video files - Google Patents

Device and method for acquiring relevant content of video files

Info

Publication number
CN104199885A
CN104199885A
Authority
CN
China
Prior art keywords
content association
video file
content
address
associated audio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410418639.1A
Other languages
Chinese (zh)
Inventor
梁嘉燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Guangzhou Mobile R&D Center
Samsung Electronics Co Ltd
Original Assignee
Samsung Guangzhou Mobile R&D Center
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Guangzhou Mobile R&D Center and Samsung Electronics Co Ltd
Priority to CN201410418639.1A
Publication of CN104199885A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73 Querying
    • G06F16/738 Presentation of query results
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73 Querying
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/7867 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings

Abstract

The invention provides a device and method for acquiring relevant content of video files. The device comprises a relevant content determining unit, an address acquiring unit and an interface providing unit. The relevant content determining unit determines the relevant content of a video file that is being played; the address acquiring unit acquires the address of the relevant content; and the interface providing unit provides an interface used to acquire the relevant content from the address. With the device and method, the relevant content of a video file that is being played can be acquired easily and conveniently, the user's need to acquire the relevant content can be met, the effectiveness of interaction with the user is enhanced, and the user experience is improved.

Description

Apparatus and method for acquiring associated content of a video file
Technical field
The present invention relates generally to content acquisition technology and, more particularly, to an apparatus and a method for acquiring associated content of a video file.
Background art
With the rapid development of network technology and the wide application of multimedia technology, more and more electronic devices are capable of playing video files, in particular network video files. At present, the process of acquiring content related to a video file (for example, audio that is being played, such as the opening theme song, the ending theme song, or an interlude, or an object displayed in the video, such as a character's costume or a prop) is very complicated. For example, if a user wants to obtain (for example, by downloading) an interlude from a video file or a picture of a certain character's costume, at least the following steps are required: first, the user has to enter a search interface; then, the user searches for the interlude or the picture under the corresponding search interface; finally, the user manually downloads the relevant interlude or picture from the search results. This process of acquiring content related to a video file is tedious and time-consuming, and brings great inconvenience to the user. Moreover, when searching for an interlude or a picture, it is difficult to find suitable search conditions, so the search results are often unsatisfactory.
It can be seen that the existing ways of acquiring content related to a video file have many drawbacks and cannot meet the user's need to acquire such content.
Summary of the invention
Exemplary embodiments of the present invention provide an apparatus and a method for acquiring associated content of a video file.
According to an aspect of the present invention, an apparatus for acquiring associated content of a video file is provided, comprising: an associated content determining unit for determining associated content of a video file that is being played; an address acquisition unit for acquiring an address of the associated content; and an interface providing unit for providing an interface used to acquire the associated content from the address.
The apparatus may further comprise: an acquisition unit for acquiring the associated content from the address according to an operation performed on the interface.
In the apparatus, the associated content determining unit may comprise at least one of: a content analysis unit for analyzing content in the video file that is being played to determine the associated content; a file parsing unit for parsing extension information in the video file to determine the associated content; and a content designating unit for designating the associated content according to a user input.
In the apparatus, the associated content may comprise at least one of: associated audio and an associated displayed object.
In the apparatus, the content analysis unit may analyze the volume of the video file that is being played and determine the associated audio based on the magnitude of the volume, or the content analysis unit may analyze an object displayed in the video file that is being played and determine the associated displayed object based on an appearance feature of the object.
In the apparatus, the file parsing unit may parse the extension information to determine acquisition breakpoints set in the extension information, and determine content corresponding to an acquisition breakpoint as the associated content.
In the apparatus, the user input may comprise an instruction for selecting the associated content from a playback interface of the video file, and the content designating unit may determine the content selected according to the instruction as the associated content.
In the apparatus, the address acquisition unit may further acquire description information about the associated content.
In the apparatus, the address acquisition unit may acquire the address of the associated content from extension information in the video file that corresponds to the associated content.
In the apparatus, the address acquisition unit may comprise: a search condition determining unit for determining a search condition for the associated content; a search unit for performing a search using the determined search condition; and an address determining unit for determining the address from the results of the search.
In the apparatus, the search condition determining unit may determine the search condition for the associated content based on extension information in the video file that corresponds to the associated content.
In the apparatus, the associated content may comprise associated audio, and the search condition may comprise at least one of: the theme of the associated audio, the title of the associated audio, the melody of the associated audio, and the lyrics of the associated audio.
In the apparatus, the search condition determining unit may analyze the playback position of the associated audio in the video file and determine the theme of the associated audio based on the playback position and the title of the video file; or the search condition determining unit may determine the title of the associated audio based on extension information in the video file that corresponds to the associated content; or the search condition determining unit may determine the melody of the associated audio by extracting the melody from the video file that is being played; or the search condition determining unit may determine the lyrics of the associated audio by extracting the lyrics from a subtitle file of the video file; or the search condition determining unit may determine the lyrics of the associated audio by performing speech recognition on the video file that is being played.
In the apparatus, the associated content may comprise an associated displayed object, and the search condition determining unit may determine an appearance feature of the associated displayed object as the search condition for the associated displayed object.
In the apparatus, the interface providing unit may provide the interface used to acquire the associated audio from the address within the period during which the associated audio is played.
In the apparatus, the interface providing unit may display the interface in at least one of the following positions: a position in the playback progress bar of the video file corresponding to the playback time of the associated content, a position near the video file information bar in the playback interface of the video file, and a position near the associated displayed object in the playback interface of the video file.
In the apparatus, the interface may comprise a button, a link, or a menu corresponding to the address of the associated content.
In the apparatus, an indicator for identifying the type of the associated content may be marked on the button, or the interface providing unit may display an indicator for identifying the type of the associated content beside the button.
In the apparatus, the menu may comprise at least one address of the associated content and description information about the associated content.
In the apparatus, the associated content may comprise associated audio, and the at least one address may correspond to different versions of the associated audio.
According to another aspect of the present invention, a method for acquiring associated content of a video file is provided, comprising the following steps: determining associated content of a video file that is being played; acquiring an address of the associated content; and providing an interface used to acquire the associated content from the address.
The method may further comprise: acquiring the associated content from the address according to an operation performed on the interface.
In the method, the associated content of the video file that is being played may be determined in at least one of the following ways: analyzing content in the video file that is being played to determine the associated content; parsing extension information in the video file to determine the associated content; and designating the associated content according to a user input.
In the method, the associated content may comprise at least one of: associated audio and an associated displayed object.
In the method, the content in the video file that is being played may be analyzed to determine the associated content in at least one of the following ways: analyzing the volume of the video file that is being played and determining the associated audio based on the magnitude of the volume; and analyzing an object displayed in the video file that is being played and determining the associated displayed object based on an appearance feature of the object.
In the method, the step of parsing the extension information in the video file to determine the associated content may comprise: parsing the extension information to determine acquisition breakpoints set in the extension information, and determining content corresponding to an acquisition breakpoint as the associated content.
In the method, the user input may comprise an instruction for selecting the associated content from a playback interface of the video file, and the step of designating the associated content according to the user input may comprise: determining the content selected according to the instruction as the associated content.
In the method, the step of acquiring the address of the associated content may further comprise: acquiring description information about the associated content.
In the method, the step of acquiring the address of the associated content may comprise: acquiring the address of the associated content from extension information in the video file that corresponds to the associated content.
In the method, the step of acquiring the address of the associated content may comprise: determining a search condition for the associated content; performing a search using the determined search condition; and determining the address from the results of the search.
In the method, the step of determining the search condition for the associated content may comprise: determining the search condition for the associated content based on extension information in the video file that corresponds to the associated content.
In the method, the associated content may comprise associated audio, and the search condition may comprise at least one of: the theme of the associated audio, the title of the associated audio, the melody of the associated audio, and the lyrics of the associated audio.
In the method, the search condition for the associated content may be determined in at least one of the following ways: analyzing the playback position of the associated audio in the video file and determining the theme of the associated audio based on the playback position and the title of the video file; determining the title of the associated audio based on extension information in the video file that corresponds to the associated content; determining the melody of the associated audio by extracting the melody from the video file that is being played; determining the lyrics of the associated audio by extracting the lyrics from a subtitle file of the video file; and determining the lyrics of the associated audio by performing speech recognition on the video file that is being played.
In the method, the associated content may comprise an associated displayed object, and the step of determining the search condition for the associated content may comprise: determining an appearance feature of the associated displayed object as the search condition for the associated displayed object.
In the method, the step of providing the interface used to acquire the associated content from the address may comprise: providing the interface used to acquire the associated audio from the address within the period during which the associated audio is played.
In the method, the step of providing the interface used to acquire the associated content from the address may comprise: displaying the interface in at least one of the following positions: a position in the playback progress bar of the video file corresponding to the playback time of the associated content, a position near the video file information bar in the playback interface of the video file, and a position near the associated displayed object in the playback interface of the video file.
In the method, the interface may comprise a button, a link, or a menu corresponding to the address of the associated content.
In the method, the step of providing the interface used to acquire the associated content from the address may further comprise: marking an indicator for identifying the type of the associated content on the button, or displaying an indicator for identifying the type of the associated content beside the button.
In the method, the menu may comprise at least one address of the associated content and description information about the associated content.
In the method, the associated content may comprise associated audio, and the at least one address may correspond to different versions of the associated audio.
With the apparatus and method for acquiring associated content of a video file according to exemplary embodiments of the present invention, the associated content of a video file that is being played can be acquired easily and conveniently, which not only satisfies the user's need to acquire such content but also enhances the effectiveness of interaction with the user and improves the user experience.
Brief description of the drawings
The above and other objects and features of the present invention will become apparent from the following description of exemplary embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a block diagram of an apparatus for acquiring associated content of a video file according to an exemplary embodiment of the present invention;
Fig. 2 is a block diagram of an apparatus for acquiring associated content of a video file according to another exemplary embodiment of the present invention;
Fig. 3 is a block diagram of the structure of the associated content determining unit of the apparatus for acquiring associated content of a video file according to an exemplary embodiment of the present invention;
Fig. 4 is a block diagram of the structure of the address acquisition unit of the apparatus for acquiring associated content of a video file according to an exemplary embodiment of the present invention;
Fig. 5 is a flowchart of a method for acquiring associated content of a video file according to an exemplary embodiment of the present invention;
Fig. 6 is a flowchart of a method for acquiring associated content of a video file according to another exemplary embodiment of the present invention;
Fig. 7 is a flowchart of a step of determining associated audio according to an exemplary embodiment of the present invention;
Fig. 8 is a flowchart of a step of acquiring the address of associated content according to an exemplary embodiment of the present invention;
Fig. 9 is a flowchart of a step of acquiring the address of associated audio according to an exemplary embodiment of the present invention;
Fig. 10 is a flowchart of a step of acquiring the address of associated audio according to another exemplary embodiment of the present invention;
Fig. 11 is a flowchart of a step of acquiring the address of associated audio according to another exemplary embodiment of the present invention;
Fig. 12 is a flowchart of a step of acquiring the address of associated audio according to another exemplary embodiment of the present invention;
Fig. 13 is a flowchart of a step of acquiring the address of associated audio according to another exemplary embodiment of the present invention;
Fig. 14 is a schematic diagram of an interface for associated audio according to an exemplary embodiment of the present invention;
Fig. 15 is a schematic diagram of an interface for associated audio according to another exemplary embodiment of the present invention;
Fig. 16 is a schematic diagram of an interface for associated audio according to another exemplary embodiment of the present invention;
Fig. 17 is a schematic diagram of an interface for associated audio according to another exemplary embodiment of the present invention;
Fig. 18 is a schematic diagram of an interface for associated audio according to another exemplary embodiment of the present invention.
Detailed description of embodiments
Embodiments of the present invention will now be described in detail with reference to the accompanying drawings, in which examples of the embodiments are shown and in which like reference numerals refer to like parts throughout.
Fig. 1 is a block diagram of an apparatus for acquiring associated content of a video file according to an exemplary embodiment of the present invention.
As shown in Fig. 1, the apparatus for acquiring associated content of a video file according to an exemplary embodiment of the present invention (hereinafter referred to as the associated content acquisition apparatus) comprises an associated content determining unit 100, an address acquisition unit 200, and an interface providing unit 300. These units may be implemented by general-purpose hardware such as a digital signal processor or a field programmable gate array, by dedicated hardware such as a special-purpose chip, or entirely in software as a computer program, for example as modules of a multimedia player installed in a terminal for playing video files.
Specifically, the associated content determining unit 100 determines associated content of a video file that is being played. Here, associated content refers to content associated with the video file that the user may wish to acquire. As an example, the associated content may comprise at least one of: associated audio (for example, the opening theme song, the ending theme song, or an interlude) and an associated displayed object (for example, a character's costume or a prop). It should be understood by those skilled in the art that the associated content may also comprise other content, for example, personal information about the protagonist of the video file that is being played.
As an example, while the video file is being played, the associated content determining unit 100 may determine the associated content by analyzing the content being played, or by parsing the video file. In addition, the associated content determining unit 100 may determine content directly designated by the user while the video file is being played as the associated content.
The address acquisition unit 200 acquires the address of the associated content. Here, as an example, the address may be a network download address or a local storage address of the associated content, or any other address from which the associated content can be acquired.
As an example, after the associated content determining unit 100 has determined the associated content of the video file, the address acquisition unit 200 may determine a search condition for searching for the associated content and perform a search using the search condition to obtain an address from which the associated content can be acquired. Alternatively, the address acquisition unit 200 may acquire the address of the associated content from the video file itself; as an example, the address may be stored in additional information (for example, extension information) of the video file.
Optionally, the address acquisition unit 200 may also acquire description information about the associated content. For example, when performing a search using the search condition, the address acquisition unit 200 may also acquire the description information of the associated content corresponding to each address. For example, when the associated content is associated audio (for example, the opening theme song of the video file), the address acquisition unit 200 may, while acquiring the address of the opening theme song, also acquire description information such as the title of the song, the performer, and the version of the performance (accompaniment version, live version, etc.).
The interface providing unit 300 provides an interface used to acquire the associated content from the address. Here, the interface may be an interface corresponding to the address of the associated content; that is, by performing a predetermined operation on the interface, the associated content can be acquired from the corresponding address.
As an example, after the address acquisition unit 200 has acquired the address of the associated content, the interface providing unit 300 may generate an interface corresponding to the address of the associated content (for example, a button, a link, or a menu) and display the interface in the playback interface of the video file. In this case, when the user performs a predetermined operation on the displayed interface (for example, clicking a button), the associated content can be acquired from the corresponding address.
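The following is a minimal Python sketch, not part of the patent, of how the three units of Fig. 1 might be wired together in a player plug-in; all class names, method names, and the example URL are hypothetical.

```python
# Hypothetical sketch of the Fig. 1 pipeline: determine associated content,
# acquire its address, then expose an interface (here just a callback) for it.
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class AssociatedContent:
    kind: str          # e.g. "audio" or "object"
    description: str   # e.g. "opening theme song"
    address: Optional[str] = None


class AssociatedContentDeterminingUnit:
    def determine(self, video_path: str) -> List[AssociatedContent]:
        # Placeholder: real logic would analyze the stream, parse extension
        # information, or react to a user designation (see Fig. 3).
        return [AssociatedContent(kind="audio", description="opening theme song")]


class AddressAcquisitionUnit:
    def acquire(self, item: AssociatedContent) -> AssociatedContent:
        # Placeholder: real logic would read the address from extension
        # information or search for it (see Fig. 4).
        item.address = "https://example.com/theme-song.mp3"  # assumed URL
        return item


class InterfaceProvidingUnit:
    def provide(self, item: AssociatedContent,
                on_activate: Callable[[AssociatedContent], None]) -> None:
        # Placeholder: a real player would draw a button/link/menu in the
        # playback interface; here we only register the activation callback.
        print(f"[UI] showing button for {item.description} -> {item.address}")
        self._on_activate, self._item = on_activate, item

    def simulate_click(self) -> None:
        self._on_activate(self._item)


def run_pipeline(video_path: str) -> None:
    determining = AssociatedContentDeterminingUnit()
    addressing = AddressAcquisitionUnit()
    interface = InterfaceProvidingUnit()
    for item in determining.determine(video_path):
        interface.provide(addressing.acquire(item),
                          on_activate=lambda it: print(f"acquire from {it.address}"))
        interface.simulate_click()


if __name__ == "__main__":
    run_pipeline("movie.mp4")
```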
In this way, the associated content of the video file that is being played can be acquired easily and conveniently, which not only satisfies the user's need to acquire such content but also enhances the effectiveness of interaction with the user and improves the user experience.
Fig. 2 is a block diagram of an apparatus for acquiring associated content of a video file according to another exemplary embodiment of the present invention. In addition to the associated content determining unit 100, the address acquisition unit 200, and the interface providing unit 300 shown in Fig. 1, the apparatus shown in Fig. 2 may further comprise an acquisition unit 400.
Specifically, the associated content determining unit 100 determines associated content of a video file that is being played, the address acquisition unit 200 acquires the address of the associated content, and the interface providing unit 300 provides an interface used to acquire the associated content from the address. These units may operate as described with reference to Fig. 1, and the description will not be repeated here.
The acquisition unit 400 acquires the associated content from its address according to an operation performed on the interface for the associated content.
As an example, after the interface providing unit 300 has provided the interface corresponding to the address of the associated content, the acquisition unit 400 may acquire the associated content from the corresponding address according to the operation performed on the interface by the user. For different forms of the interface, the acquisition unit 400 may detect the operation performed by the user in order to acquire the associated content (for example, clicking a button, clicking a link, or selecting a menu item) and thereby acquire the associated content from the corresponding address. As an example, the acquisition unit 400 may download the associated content from the corresponding network download address, or extract it from a local storage address. For example, when the associated content is the opening theme song of the video file that is being played and the acquisition unit 400 detects a click on a download button serving as the interface, the acquisition unit 400 may acquire the audio file of the opening theme song from the network source address of the song.
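A minimal sketch, under the assumption that the address is either a plain HTTP(S) URL or a local path, of how such an acquisition step could be written with the Python standard library; the addresses and file names are illustrative only.

```python
# Hypothetical acquisition step: download from a network address or copy
# from a local storage address, depending on the scheme of the address.
import shutil
import urllib.request
from urllib.parse import urlparse


def acquire_associated_content(address: str, target_path: str) -> str:
    """Fetch the associated content referenced by `address` into `target_path`."""
    scheme = urlparse(address).scheme
    if scheme in ("http", "https"):
        # Network download address: stream the file to local storage.
        with urllib.request.urlopen(address, timeout=30) as response, \
                open(target_path, "wb") as out:
            shutil.copyfileobj(response, out)
    else:
        # Treat anything else as a local storage address and copy it.
        shutil.copy(address, target_path)
    return target_path


# Example (illustrative address), triggered by a click on the download button:
# acquire_associated_content("https://example.com/theme-song.mp3", "theme-song.mp3")
```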
In this way, the user can conveniently acquire the associated content of the video file that is being played, and the operation required to do so is simple and quick, which improves the user's experience of acquiring associated content.
An example structure of the associated content determining unit 100 in the associated content acquisition apparatus according to an exemplary embodiment of the present invention is described below with reference to Fig. 3.
Fig. 3 is a block diagram of the structure of the associated content determining unit in the associated content acquisition apparatus according to an exemplary embodiment of the present invention. Referring to Fig. 3, the associated content determining unit 100 according to an exemplary embodiment of the present invention may comprise at least one of the following units: a content analysis unit 101, a file parsing unit 102, and a content designating unit 103.
Specifically, the content analysis unit 101 may analyze content in the video file that is being played to determine the associated content. The content analysis unit 101 may analyze relevant attributes of the content in the video file that is being played in various ways, so as to determine related content that the user may expect to acquire.
As an example, the content analysis unit 101 may analyze the volume of the video file that is being played and determine the associated audio based on the magnitude of the volume. Specifically, in order to obtain associated audio (for example, the opening theme song, the ending theme song, or background music) from the video file that is being played, the content analysis unit 101 may extract audio from the video file (where the music is on a separate, non-mixed track, the content analysis unit 101 may extract the audio from the music track of the audio signal; where the music is mixed with other sounds, the content analysis unit 101 may obtain the audio by removing voices or other background sounds from the mixed track), and determine the ratio of the volume of the extracted audio to the total volume of the video file. If the ratio is greater than a predetermined value, the content analysis unit 101 may determine the extracted audio as the associated audio. Here, the predetermined value may be a default value or may be determined by the user; for example, it may be set or adjusted according to a user input while the video file is being played.
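The volume comparison described above could be prototyped as in the following illustrative sketch, which assumes the music track and the full mix are already available as raw sample arrays (demultiplexing and source separation are outside its scope) and which uses an assumed threshold value.

```python
# Hypothetical volume-ratio check: treat an extracted music track as
# "associated audio" when its RMS volume is a large enough fraction of the
# total volume of the video's audio.
import numpy as np


def rms_volume(samples: np.ndarray) -> float:
    """Root-mean-square amplitude of a mono sample array in [-1.0, 1.0]."""
    return float(np.sqrt(np.mean(np.square(samples)))) if samples.size else 0.0


def is_associated_audio(music_track: np.ndarray,
                        full_mix: np.ndarray,
                        threshold: float = 0.6) -> bool:
    """Return True when the music dominates the mix (threshold is user-adjustable)."""
    total = rms_volume(full_mix)
    if total == 0.0:
        return False
    return rms_volume(music_track) / total > threshold


# Example with synthetic data: a loud sine "soundtrack" inside a quiet mix.
if __name__ == "__main__":
    t = np.linspace(0, 1, 16000)
    music = 0.8 * np.sin(2 * np.pi * 440 * t)
    mix = music + 0.05 * np.random.randn(t.size)
    print(is_associated_audio(music, mix))   # expected: True
```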
As an example, the content analysis unit 101 may analyze an object displayed in the video file that is being played and determine the associated displayed object based on an appearance feature of the object. The content analysis unit 101 may analyze various appearance attributes of displayed objects (for example, color, texture, brightness, shape, and motion state) to determine an associated displayed object that the user may wish to acquire. For example, the content analysis unit 101 may analyze the costume of the leading actor shown in the video file that is being played and, based on appearance features of the costume (for example, its color and style), determine the leading actor's jacket as the associated displayed object.
It should be noted that, as an example, the content analysis unit 101 may analyze the content in the video file that is being played in a manner corresponding to a condition entered by the user, so as to determine the associated content. Specifically, the user may enter the type of the associated content (for example, audio or displayed object), key features of the associated content (for example, interludes or the leading actor's costume), and so on. For example, if the condition entered by the user is all interludes, the content analysis unit 101 analyzes the interlude audio in the video file that is being played and determines suitable interlude audio as the associated content.
In addition, the file parsing unit 102 may parse extension information in the video file to determine the associated content. Here, the extension information may be additional information added by the video file provider in the corresponding extension bytes of a video file of any of various formats. Taking a video file in the MP4 format as an example, the extension information is stored in a data object box of a user-extension type. Specifically, the video file provider may add information indicating the associated content to the extension information; accordingly, the file parsing unit 102 may extract this information by parsing the extension information and determine the corresponding associated content from it. As an example, the information indicating the associated content may be information indicating the playback position of the associated content. For example, the file parsing unit 102 may obtain the playback position of the associated content (for example, the start time of an interlude) by parsing the extension information and, when playback of the video file reaches that position, take the corresponding interlude as the associated content.
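As a concrete illustration of parsing extension information, the sketch below walks the top-level boxes of an MP4 file (each box starts with a 4-byte big-endian size and a 4-character type) and looks for a box of an assumed custom type carrying the acquisition breakpoints as a JSON payload; the box type "xasc" and the JSON layout are purely hypothetical, since the patent does not fix a concrete encoding.

```python
# Hypothetical parser for provider-added extension information in an MP4 file.
# Assumes the provider stored the breakpoints as JSON in a custom top-level
# box of type "xasc" (a made-up type used only for illustration).
import json
import struct
from typing import Iterator, List, Tuple


def iter_top_level_boxes(path: str) -> Iterator[Tuple[str, bytes]]:
    """Yield (box_type, payload) for each top-level ISO BMFF box."""
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                return
            size, box_type = struct.unpack(">I4s", header)
            if size < 8:          # 64-bit or malformed sizes are skipped here
                return
            yield box_type.decode("latin-1"), f.read(size - 8)


def read_acquisition_breakpoints(path: str) -> List[dict]:
    """Return breakpoint entries such as {"start": 63.0, "type": "interlude"}."""
    for box_type, payload in iter_top_level_boxes(path):
        if box_type == "xasc":                      # assumed custom box type
            return json.loads(payload.decode("utf-8"))
    return []


# Example: when playback reaches a breakpoint's start time, the corresponding
# interlude would be taken as the associated content.
# breakpoints = read_acquisition_breakpoints("movie.mp4")
```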
In addition, the content designating unit 103 may designate the associated content according to a user input. As an example, the user input may comprise an instruction for selecting the associated content from the playback interface of the video file, and the content designating unit 103 may determine the content selected according to the instruction as the associated content. To this end, as an example, a control that allows the user to designate the associated content may be provided on the playback interface of the video file, a specific physical button may be assigned for designating the content to be downloaded, or a specific user gesture (for example, shaking the device that is playing the video file) may be set to designate the content currently to be downloaded. For example, an icon or button for downloading the current background music may be provided in the playback interface of the video file; when the user clicks the icon or button, the content designating unit 103 may determine the current background music as the associated content.
With the associated content determining unit 100 shown in Fig. 3, various kinds of associated content can be effectively determined from the video file that is being played, thereby meeting the user's need to acquire different kinds of associated content.
It should be understood that, in the above exemplary embodiment, the associated content determining unit 100 need not comprise all three of the content analysis unit 101, the file parsing unit 102, and the content designating unit 103, and may comprise only one or two of them. For example, if the video file contains no extension information designating associated content, the associated content determining unit 100 may omit the file parsing unit 102. In addition, even if the associated content determining unit 100 comprises all three units, only one or two of them may be enabled depending on the circumstances.
According to an exemplary embodiment of the present invention, the address acquisition unit 200 acquires the address of the associated content. The address acquisition unit 200 may acquire the address directly (for example, by reading it from extension information in the video file that corresponds to the associated content) or indirectly (for example, by performing a search for the associated content).
An example structure of the address acquisition unit 200 in the associated content acquisition apparatus according to an exemplary embodiment of the present invention is described below with reference to Fig. 4. Here, the address acquisition unit 200 acquires the address of the associated content by performing a search for the associated content.
Fig. 4 is a block diagram of the structure of the address acquisition unit in the associated content acquisition apparatus according to an exemplary embodiment of the present invention. Referring to Fig. 4, the address acquisition unit 200 according to an exemplary embodiment of the present invention may comprise a search condition determining unit 201, a search unit 202, and an address determining unit 203.
Specifically, the search condition determining unit 201 may determine a search condition for the associated content. The search condition is used to search for the associated content; that is to say, by searching with the search condition on a suitable search engine, the associated content is likely to be found.
As an example, the search condition determining unit 201 may obtain the search condition for the associated content from the video file; for example, it may determine the search condition based on extension information in the video file that corresponds to the associated content. Here, the extension information corresponding to the associated content may contain information about the search condition (for example, the title of the associated audio, such as the opening theme song, the ending theme song, or an interlude; description information of the associated audio; or description information of the associated displayed object, such as characteristics of a character's costume or a prop). For example, suppose the extension information contains the titles of the audio in the video file (such as the opening theme song, the ending theme song, and the interludes); after the associated content determining unit 100 has determined the associated audio, the search condition determining unit 201 may obtain the title of the associated audio from the extension information as the search condition. For example, if the associated audio is the opening theme song, the search condition determining unit 201 extracts the name of the opening theme song from the extension information as the title of the associated audio, that is, as the search condition; if the extension information contains the title of the opening theme song of the video file (for example, "The Words of the Kokeshi"), the search condition determining unit 201 extracts "The Words of the Kokeshi" from the extension information as the search condition.
As another example, the search condition determining unit 201 may use related information or characteristics of the associated content as the corresponding search condition.
As an example, where the associated content is associated audio, the search condition may comprise at least one of: the theme of the associated audio, the title of the associated audio, the melody of the associated audio, and the lyrics of the associated audio. Accordingly, as an example, the search condition determining unit 201 may determine the search condition in at least one of the following four ways.
In the first way, the theme of the associated audio is determined as the search condition. In this case, the search condition determining unit 201 may analyze the playback position of the associated audio in the video file and determine the theme of the associated audio based on the playback position and the title of the video file. The title of the video file can be obtained from the video file itself. The playback position indicates where the associated audio lies in the playback progress bar of the video file, and information about the playback position can be extracted from the video file in various ways. The search condition determining unit 201 then determines the theme of the associated audio from the playback position and the title of the video file. For example, suppose the title of the video file is "Palace: Lock Liancheng"; if the playback position of the associated audio is at the beginning of the playback progress bar, this indicates that the associated audio is the opening theme song, and accordingly the search condition determining unit 201 may determine the theme of the associated audio as "Palace: Lock Liancheng opening theme song" and use this theme as the search condition.
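The play-position heuristic of the first way can be expressed compactly; the sketch below is illustrative, and the head/tail position thresholds are assumptions rather than values given in the patent.

```python
# Hypothetical theme builder: combine the video title with where the
# associated audio sits in the playback progress to guess its role.
def audio_theme(video_title: str, audio_start: float, video_duration: float,
                head_fraction: float = 0.1, tail_fraction: float = 0.9) -> str:
    """Return a search-ready theme such as '<title> opening theme song'."""
    position = audio_start / video_duration if video_duration else 0.0
    if position <= head_fraction:
        role = "opening theme song"
    elif position >= tail_fraction:
        role = "ending theme song"
    else:
        role = "interlude"
    return f"{video_title} {role}"


# Example: audio starting at the very beginning of a 2700-second episode.
print(audio_theme("Palace: Lock Liancheng", audio_start=5.0, video_duration=2700.0))
# -> "Palace: Lock Liancheng opening theme song"
```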
In the second way, the melody of the associated audio is determined as the search condition. In this case, the search condition determining unit 201 may determine the melody of the associated audio by extracting the melody from the video file that is being played. As an example, the search condition determining unit 201 may extract the melody of a fragment of the associated audio from the video file that is being played (the melody may be represented, for example, as melody features, a melody file, or melody parameters) and determine that melody as the search condition. For example, if the associated audio is the opening theme song, the search condition determining unit 201 may extract a section of its melody from the video file that is being played and determine that melody as the search condition.
In the third way, the lyrics of the associated audio are determined as the search condition. In this case, the search condition determining unit 201 may determine the lyrics of the associated audio by extracting them from a subtitle file of the video file. Here, the subtitle file may contain all the text associated with the video file that is being played (for example, the text corresponding to the audio, to the characters' dialogue, and to any narration). As an example, the search condition determining unit 201 may extract the text corresponding to the associated audio (that is, the lyrics) from the subtitle file and determine those lyrics as the search condition for the associated audio. For example, if the associated audio is the opening theme song, the search condition determining unit 201 may extract its lyrics from the subtitle file and determine those lyrics as the search condition.
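For the third way, lyric extraction from a subtitle file could look like the following sketch, which assumes the subtitles are in SRT format and that the playback interval of the associated audio is already known; both assumptions go beyond what the patent specifies.

```python
# Hypothetical lyric extractor: collect SRT subtitle lines whose time range
# overlaps the interval during which the associated audio is played.
import re
from typing import List

_TIME = re.compile(r"(\d+):(\d+):(\d+)[,.](\d+)")


def _seconds(stamp: str) -> float:
    h, m, s, ms = map(int, _TIME.match(stamp).groups())
    return h * 3600 + m * 60 + s + ms / 1000.0


def lyrics_from_srt(srt_text: str, audio_start: float, audio_end: float) -> str:
    lines: List[str] = []
    for block in srt_text.strip().split("\n\n"):
        rows = block.splitlines()
        if len(rows) < 3 or "-->" not in rows[1]:
            continue
        start_str, end_str = [p.strip() for p in rows[1].split("-->")]
        start, end = _seconds(start_str), _seconds(end_str)
        if start < audio_end and end > audio_start:      # overlapping cue
            lines.extend(rows[2:])
    return " ".join(lines)


# Example with a tiny inline SRT fragment.
SRT = """1
00:00:01,000 --> 00:00:04,000
When the song begins

2
00:00:05,000 --> 00:00:08,000
these lines are the lyrics"""
print(lyrics_from_srt(SRT, audio_start=0.0, audio_end=10.0))
```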
In the fourth way, the lyrics of the associated audio are likewise determined as the search condition, but in this case the search condition determining unit 201 determines the lyrics by performing speech recognition on the video file that is being played. As an example, the search condition determining unit 201 obtains the text corresponding to the associated audio (that is, the lyrics) by performing speech recognition on the associated audio and determines those lyrics as the search condition. For example, if the associated audio is the opening theme song, the search condition determining unit 201 may perform speech recognition on the opening theme song to obtain its lyrics and determine those lyrics as the search condition.
It should be understood that the above ways of determining the search condition based on related information or characteristics of the associated audio can be combined; that is, any one or more of them may be used in combination to determine the search condition. In addition, determining the search condition based on related information or characteristics of the associated audio may be combined with obtaining the search condition directly, so as to determine the search condition more effectively.
As another example, where the associated content is an associated displayed object, the search condition determining unit 201 may determine an appearance feature of the associated displayed object as the search condition for the object. For example, the search condition determining unit 201 may determine various appearance attributes of the associated displayed object (for example, color, texture, brightness, shape, and motion state) as the search condition for the object.
The search unit 202 may perform a search using the determined search condition. After the search condition determining unit 201 has determined the search condition, the search unit 202 may perform a search with it to obtain search results. As an example, the search unit 202 may search in various search engines, websites, or locally stored files using the search condition to obtain search results.
The address determining unit 203 may determine the address of the associated content from the results of the search. Specifically, when the search unit 202 performs a search using the search condition, at least one search result may be obtained. When the search unit 202 obtains a single search result, the address determining unit 203 may directly determine the address corresponding to that result as the address of the associated content. When the search unit 202 obtains multiple search results, the address determining unit 203 may filter out the address of the associated content from them; for example, it may determine the address corresponding to the highest-ranked search result as the address of the associated content. Preferably, the address determining unit 203 may first verify the validity of the addresses in the search results and determine the address or addresses that pass the verification as the address of the associated content. For example, when multiple addresses are found in a music search engine using the search condition for the associated audio, the address determining unit 203 may first verify the validity of the found addresses and determine those that pass the verification as the addresses of the opening theme song. Alternatively, the address determining unit 203 may determine the address of the associated content from the search results in other ways; for example, it may determine addresses in the search results that satisfy a predetermined condition (which may be preset by the user) as the address of the associated content. When the address determining unit 203 determines a single address, the acquisition unit 400 acquires the associated content from that address; when it determines multiple addresses, the acquisition unit 400 may try to acquire the associated content starting from the first address, stop once the associated content has been acquired, and otherwise continue with the next address until the associated content is acquired.
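The validity check and fallback order described above could be prototyped as follows; the sketch uses a simple HTTP HEAD probe from the Python standard library and treats any reachable address as valid, which is a simplification of what a product would do, and the candidate URLs are illustrative.

```python
# Hypothetical address determining step: probe candidate addresses from the
# search results in rank order and keep the ones that respond successfully.
import urllib.error
import urllib.request
from typing import List


def is_valid_address(url: str, timeout: float = 5.0) -> bool:
    """Treat an address as valid when a HEAD request gets a 2xx/3xx answer."""
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return 200 <= response.status < 400
    except (urllib.error.URLError, ValueError):
        return False


def determine_addresses(search_results: List[str]) -> List[str]:
    """Return the verified addresses, preserving the search ranking order."""
    return [url for url in search_results if is_valid_address(url)]


# Example (illustrative URLs): the acquisition unit would then try the
# verified addresses one by one until the associated content is obtained.
# candidates = ["https://example.com/a.mp3", "https://example.org/b.mp3"]
# print(determine_addresses(candidates))
```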
With the address acquisition unit 200 shown in Fig. 4, corresponding addresses can be effectively acquired for various kinds of associated content without any manual operation. In addition, combining different acquisition methods can further extend the range of addresses that can be acquired.
The interface providing unit 300 according to an exemplary embodiment of the present invention is described in more detail below with concrete examples, in terms of the position of the interface, the form of the interface, and the time at which the interface is provided.
Optionally, regarding the position of the interface, the interface for the associated content may be displayed at a predetermined position in the playback interface of the video file that is being played. As an example, the interface providing unit 300 may display the interface in at least one of the following positions: a position in the playback progress bar of the video file corresponding to the playback time of the associated content, a position near the video file information bar in the playback interface of the video file, and a position near the associated displayed object in the playback interface of the video file.
Optionally, regarding the form of the interface, the interface for the associated content may comprise a button, a link, or a menu corresponding to the address of the associated content.
In one example, when the interface for the associated content is a button corresponding to the address of the associated content, the interface providing unit 300 may display the button at any of the above positions in the playback interface of the video file. In addition, the interface providing unit 300 may mark an indicator identifying the type of the associated content on the button, or display such an indicator beside the button.
For example, when the interface for the associated content is a button corresponding to the address of the associated content, the acquisition unit 400 may acquire the associated content directly upon receiving the user's selection of the button. Alternatively, the interface providing unit 300 may display a menu corresponding to the button upon receiving the user's selection of the button, and the acquisition unit 400 may acquire the corresponding associated content according to the user's selection from that menu. Here, the menu may comprise at least one address of the associated content and description information about the associated content. For example, where the associated content is associated audio, the at least one address may correspond to different versions of the associated audio (for example, different sound qualities or different performers), and the menu may comprise the address of at least one version of the associated audio and description information about the associated audio.
In another example, when the interface for the associated content is a link corresponding to the address of the associated content, the interface providing unit 300 may display the link at any of the above positions in the playback interface of the video file, so that the acquisition unit 400 can acquire the corresponding associated content according to the operation performed on the link by the user. In addition, the interface providing unit 300 may display an indicator identifying the type of the associated content beside the link, or display a floating icon carrying such an indicator over the link.
In another example, when the interface for the associated content is a menu corresponding to the address of the associated content, the interface providing unit 300 may display the menu directly at any of the above positions in the playback interface of the video file, so that the acquisition unit 400 can acquire the corresponding associated content according to the user's selection from the menu. Here, the menu may comprise at least one address of the associated content and description information about the associated content. For example, where the associated content is associated audio, the at least one address may correspond to different versions of the associated audio (for example, different sound qualities or different performers), and the menu may comprise the address of at least one version of the associated audio and description information about the associated audio.
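A menu of this kind can be represented by a simple data structure; the sketch below is illustrative, and the version labels and addresses are made up.

```python
# Hypothetical menu model: one entry per available version of the associated
# audio, each pairing an address with its description information.
from dataclasses import dataclass, field
from typing import List


@dataclass
class MenuEntry:
    address: str        # where this version can be acquired
    description: str    # e.g. title, performer, version, sound quality


@dataclass
class AssociatedContentMenu:
    title: str
    entries: List[MenuEntry] = field(default_factory=list)

    def choose(self, index: int) -> MenuEntry:
        """Return the entry the user selected; acquisition then uses its address."""
        return self.entries[index]


# Example (illustrative data): two versions of the same opening theme song.
menu = AssociatedContentMenu(
    title="Opening theme song",
    entries=[
        MenuEntry("https://example.com/theme-320k.mp3", "studio version, 320 kbps"),
        MenuEntry("https://example.com/theme-live.mp3", "live version"),
    ],
)
print(menu.choose(0).address)
```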
Alternatively, providing aspect the time of interface, as example, in the situation that content association is associated audio, interface provides unit 300 within the played duration of associated audio, to be provided for from the interface of the address acquisition associated audio of associated audio.For example, in the time of Presence of the Moment that content association is video file, interface provides unit 300 can in the reproduction time of Presence of the Moment, be provided for from the interface of the address acquisition Presence of the Moment of Presence of the Moment.As another example, in the situation that content association is associated demonstration object, interface provides unit 300 can within association shows the shown duration of object, be provided for showing from association the associated interface that shows object of address acquisition of object.Here, should be appreciated that, the interface of content association also can be provided in other applicable times, for example, the interface that interface provides unit 300 can show content association in the playing process of video file always, or, the interface of content association in the schedule time in the playing process of video file, shown.
Provide providing of interface that unit 300 can realize content association by above-mentioned interface, for the address acquisition content association by content association is provided convenience, thereby can meet user and obtain the demand of content association.
The method (following, to be called content association acquisition methods) of the content association for obtaining video file is described according to an exemplary embodiment of the present invention to Figure 18 in connection with Fig. 5 below.Described method can be carried out to the equipment shown in Fig. 4 by Fig. 1, also can realize by computer program.For example, described method can be carried out by the multimedia player for playing video file being arranged in terminal.
Fig. 5 illustrates the process flow diagram of content association acquisition methods according to an exemplary embodiment of the present invention.
At step S100, determine the content association of the video file of playing.Here, content association refers to that user may wish the content associated with video file of obtaining.As example, content association can comprise at least one in following: associated audio (for example, Presence of the Moment, sheet caudal flexure, interlude), the associated object (for example, character costume, stage property) that shows.In addition, it should be appreciated by those skilled in the art that content association also can comprise other guide, for example, the personal information of the protagonist of the video file of playing etc.
As example, in the time that video file is played, at step S100, the content that can play by analysis is determined content association, also can determine content association by resolving video file.In addition the content that also user directly can be specified when video file is played, is defined as content association.
At step S200, obtain the address of content association.Here, as example, described address can be network download address or the local memory address of content association, can be also other addresses that can obtain content association.
As example, after step S100 has determined the content association of video file, at step S200, can be identified for searching for the search condition of above-mentioned content association, and utilize search condition search to obtain the address for obtaining content association.In addition, at step S200, also can obtain from video file the address of content association, as example, the address of described content association for example can be stored in, in the additional information (, extend information) of video file.
Alternatively, at step S200, also can obtain the descriptor about content association.For example, in the time utilizing search condition to search for, also can obtain the associated description information for the content association of each address, for example, when content association be associated audio (for example, the Presence of the Moment of video file) time, in obtaining the address of Presence of the Moment, also can obtain for example, descriptor such as the title of Presence of the Moment, player, performance version (accompaniment version, concert version etc.) etc.
At step S300, be provided for from the interface of the address acquisition content association of content association.Here, described interface can be and the corresponding interface in address of content association, that is, by the operation that described interface is scheduled to, just can obtain content association by corresponding content association address.
As example, after step S200 has obtained the address of content association, at step S300, can produce with the corresponding interface in address of content association (for example, button, link or menu etc.), and in the broadcast interface of video file, show described interface.In this case, for example, in the time that user carries out predetermined operation (, button click) to the interface showing, can obtain content association by corresponding content association address.
In this way, the relevant content of the video file being played can be acquired easily, which not only satisfies the user's demand for obtaining relevant content but also strengthens the effectiveness of interaction with the user and improves the user experience.
Fig. 6 is a flowchart of a relevant-content acquisition method according to another exemplary embodiment of the present invention.
At step S100, the relevant content of the video file being played is determined.
At step S200, the address of the relevant content is obtained.
At step S300, an interface for acquiring the relevant content from its address is provided.
It should be understood that steps S100, S200, and S300 may include the specific operations performed in steps S100, S200, and S300 of Fig. 5; to avoid repetition, they are not described again here.
At step S400, the relevant content is acquired from its address according to an operation performed on the interface.
As an example, after the interface corresponding to the address of the relevant content has been provided at step S300, the relevant content may be acquired at step S400 from the corresponding address according to the operation the user performs on the interface. For different forms of interface, the corresponding operation performed by the user to obtain the relevant content (for example, clicking a button, clicking a link, or selecting a menu item) can be detected, and the relevant content is then acquired from the corresponding address. As examples, at step S400 the relevant content may be downloaded from a network download address, or extracted from a local storage address. For example, when the relevant content is the theme song of the video file being played and a click on a download button serving as the interface is detected, the audio file of the song may be obtained from its network source address.
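A minimal sketch of the acquisition at step S400 (Python standard library only; the function name is an assumption): the relevant content is downloaded when the address is a network URL, and copied when it is a local storage address.

```python
import shutil
import urllib.request
from urllib.parse import urlparse


def acquire_relevant_content(address: str, dest_path: str) -> str:
    """Fetch the relevant content from a network download address or a
    local storage address, as described for step S400."""
    if urlparse(address).scheme in ("http", "https"):
        # Network download address: download the file.
        urllib.request.urlretrieve(address, dest_path)
    else:
        # Local storage address: extract (copy) the file.
        shutil.copyfile(address, dest_path)
    return dest_path
```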
In this way, the user can conveniently acquire the relevant content of the video file being played, the procedure for doing so is simple and fast, and the experience of obtaining relevant content is improved.
The flow of step S100 in the relevant-content acquisition method according to exemplary embodiments of the present invention is described below with reference to specific examples.
Optionally, the relevant content of the video file being played may be determined in at least one of the following ways: analyzing the content in the video file being played to determine the relevant content (hereinafter, mode A); parsing the extended information in the video file to determine the relevant content (hereinafter, mode B); or specifying the relevant content according to user input (hereinafter, mode C).
Specifically, in mode A, various techniques may be used to analyze the attributes of the content in the video file being played, so as to determine the related content the user may wish to obtain.
As an example, the volume of the video file being played may be analyzed, and the associated audio may be determined based on the volume. Specifically, to obtain associated audio (for example, the theme song, the ending song, or the background music) from the video file being played, the audio may first be extracted from the video file (where the accompaniment is not mixed with other sounds, the audio can be extracted directly from the music track of the audio signal; where it is mixed, vocals or other background sounds can be removed from the mixed track to obtain the audio). The ratio of the volume of the extracted audio to the total volume of the video file is then determined; if this ratio exceeds a predetermined value, the extracted audio may be determined to be associated audio. The predetermined value may be a default value or may be determined by the user, for example, set or adjusted according to user input during playback of the video file.
The detailed procedure for determining associated audio based on volume according to an exemplary embodiment of the present invention is described here with reference to Fig. 7. Referring to Fig. 7, at step S1011 the video file being played is obtained. At step S1012, the audio w of the video file is extracted; where the accompaniment is not mixed with other sounds, the audio can be extracted directly from the music track of the audio signal, and where it is mixed, vocals or other background sounds can be removed from the mixed track to obtain the audio. At step S1013, the volume p of the audio w and the total volume q of the video file are determined. At step S1014, the ratio of p to q is compared with a predetermined value; if p/q is less than or equal to the predetermined value, the procedure ends. If p/q is greater than the predetermined value, the audio w is determined to be associated audio at step S1015. The predetermined value may be a default value, or may be set or adjusted according to user input.
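The comparison of steps S1013 to S1015 can be sketched as follows (Python; using RMS amplitude as the volume measure and 0.6 as the default threshold are assumptions of this sketch standing in for the predetermined value, which in practice may be set or adjusted by the user):

```python
import math


def rms_volume(samples) -> float:
    """Root-mean-square amplitude of a sequence of PCM samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples)) if samples else 0.0


def is_associated_audio(extracted_audio, full_mix, threshold: float = 0.6) -> bool:
    """Fig. 7 decision: the extracted audio w counts as associated audio when
    its volume p exceeds the fraction `threshold` of the file's total volume q."""
    p = rms_volume(extracted_audio)
    q = rms_volume(full_mix)
    return q > 0 and p / q > threshold
```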
As an example, the objects displayed in the video file being played may be analyzed, and an associated display object may be determined based on the appearance features of the objects. Various appearance attributes of a displayed object (for example, color, texture, brightness, shape, and motion state) may be analyzed to determine which displayed objects the user may wish to obtain. For example, the costume of the leading actor shown in the video file being played may be analyzed, and the actor's jacket may be determined to be an associated display object based on the appearance features of the costume (for example, its color and style).
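A very crude appearance feature of the kind mentioned above can be sketched as a dominant-color computation over a decoded frame (Python with the Pillow imaging library; obtaining the frame itself is assumed to be handled elsewhere, for example by the player's decoder):

```python
from PIL import Image


def dominant_color(frame_path: str, thumb: int = 32):
    """Return the most frequent RGB value in a downsampled frame, as a simple
    stand-in for the appearance attributes (color, texture, ...) that mode A
    may analyze for a displayed object."""
    img = Image.open(frame_path).convert("RGB").resize((thumb, thumb))
    count, color = max(img.getcolors(thumb * thumb))  # items are (count, (r, g, b))
    return color
```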
It should be noted that, as an example, the content in the video file being played may be analyzed according to a condition input by the user, so as to determine the relevant content accordingly. Specifically, the user may input the type of relevant content (for example, audio or a display object), a key feature of the relevant content (for example, an interlude, or the leading actor's costume), and so on. For example, if the condition input by the user is "all interludes", the interlude audio in the video file being played is analyzed, and suitable interlude audio is determined to be relevant content.
In mode B, the extended information may be additional information added by the video file supplier into the reserved extension bytes of a video file in any of various formats. Taking a video file in MP4 format as an example, the extended information may be stored in a data object box of a user-extension type. Specifically, the video file supplier may add information indicating the relevant content to the extended information; correspondingly, in mode B, the information indicating the relevant content can be extracted by parsing the extended information, and the corresponding relevant content can be determined from the extracted information. As an example, the information indicating the relevant content may be information indicating the playback position of the relevant content. For example, the playback position of the relevant content (for example, the starting time of an interlude) may be obtained by parsing the extended information, and when the video file is played to that position, the corresponding interlude is obtained as relevant content.
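For mode B, a simplified sketch of walking the top-level boxes of an MP4 file is given below (Python standard library only). Which box type a given supplier uses for the user-extension data, and the layout of its payload, are assumptions that depend on that supplier; "udta" below is only an example.

```python
import struct


def iter_top_level_boxes(path: str):
    """Yield (box_type, payload) pairs for the top-level boxes of an MP4 file.

    Simplification: a size field of 0 ("box extends to end of file") is not
    handled; 64-bit extended sizes are.
    """
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                return
            size, box_type = struct.unpack(">I4s", header)
            if size == 1:  # 64-bit size follows the type field
                size = struct.unpack(">Q", f.read(8))[0]
                payload = f.read(size - 16)
            else:
                payload = f.read(size - 8)
            yield box_type.decode("latin-1"), payload


def find_extension_payload(path: str, box_type: str = "udta"):
    """Return the payload of the first box whose type matches `box_type`."""
    for name, payload in iter_top_level_boxes(path):
        if name == box_type:
            return payload
    return None
```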
In mode C, as an example, the user input may include an instruction for selecting relevant content from the playback interface of the video file. Correspondingly, the content selected according to that instruction may be determined to be relevant content. To this end, a control that allows the user to specify relevant content may be provided on the playback interface of the video file; a dedicated physical button may be set aside for the user to indicate the content to be downloaded; or a specific user gesture (for example, shaking the device playing the video file) may be designated as specifying the content currently to be downloaded. For example, an icon or button for downloading the current background music may be provided in the playback interface of the video file, and when the user clicks that icon or button, the current background music is determined to be relevant content.
Through step S100 described above, various kinds of relevant content can be effectively determined from the video file being played, thereby satisfying the user's demand for obtaining different relevant content.
Further, when two or three of the above modes are combined to determine the relevant content of the video file being played, the relevant content may be determined by applying each mode in turn in a predetermined order, or by applying the modes in parallel; a sketch of both strategies follows.
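A sketch of the two combination strategies (Python; the analyzers are assumed to be callables, one per mode, each returning a list of relevant-content descriptors):

```python
from concurrent.futures import ThreadPoolExecutor


def determine_sequentially(video, analyzers):
    """Apply modes A, B, C in a predetermined order; return the first
    non-empty result."""
    for analyze in analyzers:
        result = analyze(video)
        if result:
            return result
    return []


def determine_in_parallel(video, analyzers):
    """Run all modes concurrently and merge their results."""
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda analyze: analyze(video), analyzers)
    merged = []
    for result in results:
        merged.extend(result)
    return merged
```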
According to exemplary embodiments of the present invention, the address of the relevant content can be obtained at step S200. The address may be obtained directly (for example, read from the extended information in the video file that corresponds to the relevant content), or indirectly (for example, obtained by performing a search for the relevant content).
The flow of step S200 in the relevant-content acquisition method according to an exemplary embodiment of the present invention is described below with reference to Fig. 8. Here, the address of the relevant content is obtained by performing a search for the relevant content.
Fig. 8 is a flowchart of the step of obtaining the address of the relevant content according to an exemplary embodiment of the present invention.
Referring to Fig. 8, at step S210 a search condition relating to the relevant content is determined. The search condition is used to search for the relevant content; that is, by performing a search with the search condition on a suitable search engine, the relevant content is likely to be found.
As an example, the search condition for the relevant content may be obtained from the video file at step S210; for example, it may be determined based on the extended information in the video file that corresponds to the relevant content. The extended information corresponding to the relevant content may include information about the search condition, for example, the title of the associated audio (such as the theme song, the ending song, or an interlude), descriptive information about the associated audio, or descriptive information about an associated display object (such as the characteristics of a character's costume or a prop). For example, suppose the extended information includes the titles of the audio in the video file (the theme song, the ending song, interludes, and so on); after the associated audio has been determined at step S100, its title can be obtained from the extended information at step S210 and used as the search condition. If the associated audio is the theme song, the title of the theme song is extracted from the extended information as the search condition; for example, if the extended information contains the title of the theme song of the video file (for example, "words of kokeshi"), that title is extracted from the extended information as the search condition.
As another example, at step S210, related information or characteristics of the relevant content may be used as the corresponding search condition.
As an example, when the relevant content is an associated display object, the appearance features of the display object may be determined at step S210 as the search condition for that object. For example, various appearance attributes of the display object (such as color, texture, brightness, shape, and motion state) may be determined as the search condition for the display object.
As another example, when the relevant content is associated audio, the search condition may include at least one of the following: the theme of the associated audio, the title of the associated audio, the melody of the associated audio, and the lyrics of the associated audio.
At step S220, a search is performed using the determined search condition. Optionally, after the search condition has been determined at step S210, the search may be performed at step S220 to obtain search results. As examples, the search may be performed with the search condition in various search engines, on websites, or among locally stored files.
At step S230, the address is determined from the search results. Specifically, when the search is performed at step S220, at least one search result may be obtained. If a single search result is obtained, the address corresponding to that result may be determined directly as the address of the relevant content at step S230. If multiple search results are obtained, the address of the relevant content may be selected from them at step S230; for example, the address corresponding to the highest-ranked result may be determined as the address of the relevant content. Preferably, the validity of the addresses in the search results may be verified first at step S230, and an address that passes verification (one or more such addresses) is determined to be the address of the relevant content. For example, when a search using the search condition of the associated audio returns multiple addresses from a music search engine, the validity of the returned addresses may be verified first, and an address that passes verification is determined to be the address of the song. Optionally, the address of the relevant content may also be determined from the search results in other ways at step S230, for example, by selecting an address in the search results that meets a predetermined condition (which may be preset by the user). In addition, when a single address is determined at step S230, the relevant content is acquired from that address at step S400; when multiple addresses are determined, acquisition is first attempted from the first address at step S400, and if the relevant content is obtained no further attempts are made; otherwise acquisition is attempted again from the second address, and so on, until the relevant content is obtained.
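The verification and fallback described for steps S230 and S400 can be sketched as follows (Python standard library only; treating each search result as a URL and using a simple HEAD request as the validity check are assumptions of this sketch):

```python
import urllib.error
import urllib.request


def verify_addresses(candidate_urls, timeout: float = 5.0):
    """Keep only the addresses that respond without an error status."""
    valid = []
    for url in candidate_urls:
        try:
            request = urllib.request.Request(url, method="HEAD")
            with urllib.request.urlopen(request, timeout=timeout) as response:
                if response.status < 400:
                    valid.append(url)
        except (urllib.error.URLError, ValueError):
            continue  # unreachable or malformed address: fails verification
    return valid


def acquire_with_fallback(addresses, dest_path: str):
    """Try the addresses in order; stop at the first successful download."""
    for url in addresses:
        try:
            urllib.request.urlretrieve(url, dest_path)
            return url
        except urllib.error.URLError:
            continue  # try the next address
    return None
```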
Through the processing flow of step S200 shown in Fig. 8, corresponding addresses can be obtained effectively for various kinds of relevant content without any manual operation. Moreover, combining different ways of obtaining addresses can further expand the range of addresses that can be obtained.
Specific implementations of step S200 are described below with reference to Figs. 9 to 13. Figs. 9 to 13 are flowcharts of processes for determining the address of associated audio from a search condition, for the case where the relevant content is associated audio.
Fig. 9 is a flowchart of the step of obtaining the address of associated audio according to an exemplary embodiment of the present invention. In the example shown in Fig. 9, the title of the associated audio is used as the search condition, and the address of the associated audio is determined by searching.
As shown in Fig. 9, at step S2101 the extended information in the video file that corresponds to the associated audio is obtained. Here, the extended information includes the title of the associated audio (for example, the theme song, the ending song, or an interlude). At step S2102, the title of the associated audio is obtained from the extended information. At step S2103, a search is performed using the title as the search condition; as an example, the title may be searched for in a music search engine. At step S2104, the address of the associated audio is determined from the search results; as an example, the results may be at least one address returned by the music search engine.
Fig. 10 is a flowchart of the step of obtaining the address of associated audio according to another exemplary embodiment of the present invention. In the example shown in Fig. 10, the theme of the associated audio is determined as the search condition, and the address of the associated audio is determined by searching.
As shown in Fig. 10, at step S2111 the playback position of the associated audio in the video file is analyzed. The playback position indicates where the associated audio falls in the playback progress of the video file, and information about the playback position may be extracted from the video file in various ways. At step S2112, the title of the video file is obtained, for example from the video file itself. At step S2113, the theme of the associated audio is determined based on the playback position and the title of the video file. As an example, if the playback position of the associated audio is at the end of the playback time of the video file (for example, the 40th minute of a 45-minute video), the associated audio is the corresponding ending song; supposing the video file is titled "Gong Suo Zhu Lian", the theme of the associated audio is then "Gong Suo Zhu Lian ending song". At step S2114, a search is performed using the theme of the associated audio; for example, "Gong Suo Zhu Lian ending song" in the above example may be entered into a music search engine. At step S2115, the address of the associated audio is determined from the search results; as an example, the results may be at least one address returned by the music search engine.
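The inference at step S2113 can be sketched as a simple position heuristic (Python; the 10% boundaries are illustrative placeholders of this sketch, not values taken from the specification):

```python
def audio_theme(video_title: str, audio_start_s: float, video_duration_s: float) -> str:
    """Combine the playback position with the video title to form the theme,
    e.g. "<title> ending song" for audio that starts near the end of the video."""
    position = audio_start_s / video_duration_s
    if position < 0.10:
        role = "theme song"
    elif position > 0.90:
        role = "ending song"
    else:
        role = "interlude"
    return f"{video_title} {role}"
```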
Fig. 11 is a flowchart of the step of obtaining the address of associated audio according to another exemplary embodiment of the present invention. In the example shown in Fig. 11, the melody of the associated audio is determined as the search condition, and the address of the associated audio is determined by searching.
As shown in Fig. 11, at step S2121 the melody of the associated audio is extracted from the video file being played. As an example, the melody of a segment of the associated audio may be extracted (the melody may be represented as melody features, a melody file, melody parameters, and so on). At step S2122, the extracted melody of the associated audio is determined as the search condition; for example, if the associated audio is the theme song, a segment of its melody may be extracted from the video file being played and determined as the search condition. At step S2123, a search is performed using the search condition; as an example, the melody may be used as the search condition in a music search engine. At step S2124, the address of the associated audio is determined from the search results; as an example, the results may be at least one address returned by the music search engine.
In addition, the lyrics of the associated audio may be determined as the search condition in the manners shown in Fig. 12 and Fig. 13, and the address of the associated audio determined by searching.
Fig. 12 is a flowchart of the step of obtaining the address of associated audio according to another exemplary embodiment of the present invention.
As shown in Fig. 12, at step S2131 the lyrics of the associated audio are extracted from the subtitle file of the video file. The subtitle file may contain all the text content related to the video file being played (for example, the text corresponding to the audio, the text of character dialogue, and the text of narration). As an example, the text corresponding to the associated audio (that is, the lyrics) may be extracted from the subtitle file. At step S2132, the extracted lyrics of the associated audio are determined as the search condition; for example, if the associated audio is the theme song, its lyrics may be extracted from the subtitle file at step S2131 and determined as the search condition at step S2132. At step S2133, a search is performed using the search condition; as an example, the lyrics may be used as the search condition in a music search engine. At step S2135, the address of the associated audio is determined from the search results; as an example, the results may be at least one address returned by the music search engine.
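A sketch of step S2131 for SRT-style subtitle files is given below (Python standard library only; the assumption that the subtitle file uses the common "HH:MM:SS,mmm --> HH:MM:SS,mmm" cue format belongs to this sketch, not to the specification):

```python
import re

CUE = re.compile(r"(\d\d:\d\d:\d\d,\d\d\d) --> (\d\d:\d\d:\d\d,\d\d\d)")


def _to_seconds(timestamp: str) -> float:
    hours, minutes, rest = timestamp.split(":")
    seconds, millis = rest.split(",")
    return int(hours) * 3600 + int(minutes) * 60 + int(seconds) + int(millis) / 1000.0


def lyrics_from_srt(srt_text: str, start_s: float, end_s: float) -> str:
    """Collect the subtitle text whose cues overlap the associated audio's
    playback span [start_s, end_s]; that text serves as the lyrics."""
    collected = []
    for block in re.split(r"\n\s*\n", srt_text.strip()):
        rows = block.splitlines()
        for i, row in enumerate(rows):
            match = CUE.match(row.strip())
            if match:
                cue_start, cue_end = (_to_seconds(t) for t in match.groups())
                if cue_start < end_s and cue_end > start_s:
                    collected.extend(rows[i + 1:])  # text lines follow the cue
                break
    return " ".join(line.strip() for line in collected)
```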
Fig. 13 is a flowchart of the step of obtaining the address of associated audio according to another exemplary embodiment of the present invention.
As shown in Fig. 13, at step S2141 the text corresponding to the associated audio (that is, the lyrics) is determined by performing speech recognition on the associated audio; for example, if the associated audio is the theme song, speech recognition may be performed on it to obtain the corresponding lyrics. At step S2142, the lyrics of the associated audio are determined as the search condition. At step S2143, a search is performed using the search condition; as an example, the lyrics may be used as the search condition in a music search engine. At step S2144, the address of the associated audio is determined from the search results; as an example, the results may be at least one address returned by the music search engine.
It should be understood that the different manners shown in Figs. 9 to 13 may be combined arbitrarily to determine the address of the relevant content.
Step S300 of the relevant-content acquisition method according to exemplary embodiments of the present invention is described in more detail below with reference to specific examples, in terms of three aspects of the provided interface: the time at which it is provided, its position, and its form.
Optionally, regarding the time at which the interface is provided: as an example, when the relevant content is associated audio, an interface for acquiring the associated audio from its address may be provided at step S300 during the time the associated audio is being played. For example, when the relevant content is the theme song of the video file, an interface for acquiring the song from its address may be provided during the playback time of the song. As another example, when the relevant content is an associated display object, an interface for acquiring the display object from its address may be provided during the time the display object is shown. It should be understood that the interface for the relevant content may also be provided at other suitable times; for example, it may be displayed throughout playback of the video file, or only during a predetermined period within playback.
Optionally, regarding the position of the interface: the interface for the relevant content may be displayed at a predetermined position in the playback interface of the video file being played. As examples, at step S300 the interface may be displayed at at least one of the following positions: near the position in the playback progress bar of the video file that corresponds to the playback time of the relevant content; in the video file information bar in the playback interface of the video file; and near the associated display object in the playback interface of the video file.
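The first of those positions can be sketched as a simple mapping from playback time to a point on the progress bar (Python; the pixel coordinates are assumptions of this sketch, since the actual layout depends on the player's UI toolkit):

```python
def interface_x_position(audio_start_s: float, video_duration_s: float,
                         bar_left_px: int, bar_width_px: int) -> int:
    """Pixel x-coordinate near which to place the acquisition interface: the
    point of the progress bar corresponding to the relevant content's playback time."""
    fraction = max(0.0, min(1.0, audio_start_s / video_duration_s))
    return bar_left_px + int(fraction * bar_width_px)
```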
For example, in one case the relevant content is associated audio, and the interface for the associated audio is displayed near the position in the playback progress bar of the video file that corresponds to the playback time of the associated audio. Fig. 14 is a schematic diagram of the interface for associated audio according to an exemplary embodiment of the present invention. As shown in Fig. 14, the associated audio consists of the theme song and the ending song, and the interfaces for the associated audio (shown by the dashed boxes) are displayed at the playback positions of those songs. The interface for the associated audio may be displayed at the playback position corresponding to the relevant content throughout playback of the video file.
In another case, the relevant content is associated audio, and the interface for the associated audio is displayed in the video file information bar in the playback interface of the video file. Fig. 15 is a schematic diagram of the interface for associated audio according to another exemplary embodiment of the present invention. As shown in Fig. 15, the associated audio is the theme song, and its interface is displayed in the video file information bar. After the address of the associated audio has been acquired, the interface is displayed while the song is being played, and is no longer displayed when the song ends.
Optionally, regarding the form of the interface: the interface for the relevant content may include a button, a link, or a menu corresponding to the address of the relevant content.
In one example, when the interface for the relevant content is a button corresponding to the address of the relevant content, the button may be displayed at step S300 at any of the positions in the playback interface described above. In addition, an indicator identifying the type of relevant content may be marked on the button at step S300, or such an indicator may be displayed beside the button (for example, the musical-note symbol shown in Figs. 14 and 15 for the case where the relevant content is associated audio).
For example, when the interface for the relevant content is a button corresponding to the address of the relevant content, the relevant content may be acquired directly at step S400 upon receiving the user's selection of the button. Alternatively, a menu corresponding to the button may be displayed at step S300 after the user's selection of the button is received, and at step S400 the corresponding relevant content is acquired according to the user's selection of an item in the menu. The menu may include at least one address of the relevant content and descriptive information about the relevant content. For example, when the relevant content is associated audio, the at least one address may correspond to different versions of the associated audio (for example, different sound qualities or different performers), and the menu includes the address of at least one version of the associated audio and descriptive information about the associated audio.
In another example, when the interface for the relevant content is a link corresponding to the address of the relevant content, the link may be displayed at step S300 at any of the positions in the playback interface described above, so that the corresponding relevant content can subsequently be acquired at step S400 according to the user's operation on the link. In addition, an indicator identifying the type of relevant content may be displayed beside the link at step S300, or a floating icon serving as such an indicator may be displayed over the link.
In another example, when the interface for the relevant content is a menu corresponding to the address of the relevant content, the menu may be displayed directly at step S300 at any of the positions in the playback interface described above, so that the corresponding relevant content can subsequently be acquired at step S400 according to the user's selection of an item in the menu. The menu may include at least one address of the relevant content and descriptive information about the relevant content. For example, when the relevant content is associated audio, the at least one address may correspond to different versions of the associated audio (for example, different sound qualities or different performers), and the menu includes the address of at least one version of the associated audio and descriptive information about the associated audio.
In one example, Fig. 16 is a schematic diagram of the interface for associated audio according to another exemplary embodiment of the present invention. As shown in Fig. 16, the associated audio is an interlude. In this case, a button and a musical-note symbol indicating the type of relevant content may be displayed at step S300 as shown in Fig. 16, and after the user's selection of the button is received, a corresponding download selection menu may be displayed. Each entry in the download selection menu may include the singer, the episode title, and descriptive information about the interlude, so that the user can select an entry from the menu. In Fig. 16, the songs corresponding to the entries in the download selection menu have the same singer and the same episode title but different descriptive information.
In another example, Fig. 17 is a schematic diagram of the interface for associated audio according to another exemplary embodiment of the present invention. As shown in Fig. 17, the associated audio is an interlude. In this case, a button and a musical-note symbol indicating the type of relevant content may be displayed at step S300 as shown in Fig. 17, and after the user's selection of the button is received, a corresponding download selection menu may be displayed. Each entry in the download selection menu may include the singer and the episode title of the interlude, so that the user can select an entry from the menu. In Fig. 17, the songs corresponding to the entries in the download selection menu have the same episode title but different singers.
In another example, Fig. 18 is a schematic diagram of the interface for associated audio according to another exemplary embodiment of the present invention. As shown in Fig. 18, the associated audio is an interlude. In this case, a button and a musical-note symbol indicating the type of relevant content may be displayed at step S300 as shown in Fig. 18, and after the user's selection of the button is received, a corresponding download selection menu may be displayed. Each entry in the download selection menu may include the singer, the episode title, and descriptive information about the interlude, so that the user can select an entry from the menu. In Fig. 18, the songs corresponding to the entries in the download selection menu have the same singer, the same episode title, and the same descriptive information, but different sound-quality versions. When an entry in the download selection menu is selected, a corresponding download selection submenu may be displayed; each entry in the submenu may correspond to a different version of the interlude. As shown in Fig. 18, the download selection submenu corresponding to the first entry of the download selection menu includes a standard-quality version and a smooth (lower-bitrate) quality version.
Through step S300 described above, an interface for the relevant content can be provided, which makes it convenient to acquire the relevant content from its address and thus satisfies the user's demand for obtaining relevant content.
In summary, the device and method for acquiring the relevant content of a video file according to exemplary embodiments of the present invention make it easy to obtain the relevant content of the video file being played, which not only satisfies the user's demand for obtaining relevant content but also strengthens the effectiveness of interaction with the user and improves the user experience.
The present invention has been described above with reference to specific embodiments, but its implementation is not limited thereto. Those skilled in the art may make various modifications and variations within the spirit and scope of the present invention, and such modifications and variations fall within the scope of protection defined by the claims.

Claims (21)

1. A device for acquiring the relevant content of a video file, comprising:
a relevant content determining unit configured to determine the relevant content of a video file being played;
an address acquiring unit configured to acquire the address of the relevant content;
an interface providing unit configured to provide an interface for acquiring the relevant content from the address.
2. The device of claim 1, further comprising: a content acquiring unit configured to acquire the relevant content from the address according to an operation performed on the interface.
3. The device of claim 1, wherein the relevant content determining unit comprises at least one of the following:
a content analysis unit configured to analyze the content in the video file being played to determine the relevant content;
a file parsing unit configured to parse the extended information in the video file to determine the relevant content;
a content specifying unit configured to specify the relevant content according to user input.
4. The device of claim 3, wherein the relevant content comprises at least one of the following: associated audio and an associated display object.
5. The device of claim 4, wherein the content analysis unit analyzes the volume of the video file being played and determines the associated audio based on the volume, or analyzes the objects displayed in the video file being played and determines the associated display object based on the appearance features of the objects.
6. The device of claim 3, wherein the file parsing unit parses the extended information to determine an acquisition point set in the extended information, and determines the content corresponding to the acquisition point as the relevant content.
7. The device of claim 3, wherein the user input comprises an instruction for selecting relevant content from the playback interface of the video file, and the content specifying unit determines the content selected according to the instruction as the relevant content.
8. The device of claim 1, wherein the address acquiring unit further acquires descriptive information about the relevant content.
9. The device of claim 1, wherein the address acquiring unit acquires the address of the relevant content from the extended information in the video file that corresponds to the relevant content.
10. The device of claim 1, wherein the address acquiring unit comprises:
a search condition determining unit configured to determine a search condition relating to the relevant content;
a search unit configured to perform a search using the determined search condition;
an address determining unit configured to determine the address from the results of the search.
11. The device of claim 10, wherein the search condition determining unit determines the search condition relating to the relevant content based on the extended information in the video file that corresponds to the relevant content.
12. The device of claim 10, wherein the relevant content comprises associated audio, and the search condition comprises at least one of the following: the theme of the associated audio, the title of the associated audio, the melody of the associated audio, and the lyrics of the associated audio.
13. The device of claim 12, wherein the search condition determining unit analyzes the playback position of the associated audio in the video file and determines the theme of the associated audio based on the playback position and the title of the video file; or determines the title of the associated audio based on the extended information in the video file that corresponds to the relevant content; or determines the melody of the associated audio by extracting the melody from the video file being played; or determines the lyrics of the associated audio by extracting the lyrics from the subtitle file of the video file; or determines the lyrics of the associated audio by performing speech recognition on the video file being played.
14. The device of claim 10, wherein the relevant content comprises an associated display object, and the search condition determining unit determines the appearance features of the display object as the search condition relating to the display object.
15. The device of claim 13, wherein the interface providing unit provides the interface for acquiring the associated audio from the address during the time the associated audio is being played.
16. The device of claim 1 or 8, wherein the interface providing unit displays the interface at at least one of the following positions: near the position in the playback progress bar of the video file that corresponds to the playback time of the relevant content; in the video file information bar in the playback interface of the video file; and near the associated display object in the playback interface of the video file.
17. The device of claim 16, wherein the interface comprises a button, a link, or a menu corresponding to the address of the relevant content.
18. The device of claim 17, wherein an indicator identifying the type of the relevant content is marked on the button, or the interface providing unit displays an indicator identifying the type of the relevant content beside the button.
19. The device of claim 17, wherein the menu comprises at least one address of the relevant content and descriptive information about the relevant content.
20. The device of claim 19, wherein the relevant content comprises associated audio, and the at least one address corresponds to different versions of the associated audio.
21. A method for acquiring the relevant content of a video file, comprising:
determining the relevant content of a video file being played;
obtaining the address of the relevant content;
providing an interface for acquiring the relevant content from the address.