CN102915725A - Human-computer interaction song singing system and method - Google Patents
- Publication number
- CN102915725A CN102915725A CN2012103334895A CN201210333489A CN102915725A CN 102915725 A CN102915725 A CN 102915725A CN 2012103334895 A CN2012103334895 A CN 2012103334895A CN 201210333489 A CN201210333489 A CN 201210333489A CN 102915725 A CN102915725 A CN 102915725A
- Authority
- CN
- China
- Prior art keywords
- module
- unit
- audio
- tone
- performance
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The invention provides a human-computer interaction song singing system and method. The system comprises a server module, a play module, a display module, a sound acquisition module, a sound output module and a sound analysis module. The server module stores the audio, video and subtitle files of songs; the play module acquires these files from the server module; the sound acquisition module captures the singer's singing audio; the sound analysis module parses the singing audio into singing tones and outputs them to the display module; the display module receives and displays the video files, subtitle files, audio files and singing tones; and the sound output module plays the song audio and the singer's audio. The invention solves the prior-art problems of poor human-computer interaction and the inability to score singing by intonation.
Description
Technical field
The present invention relates to singing systems and methods, and in particular to a human-computer interaction song singing system and method.
Background technology
At present, in KTV venues, audio-visual systems, personal portable devices and computer-based singing systems only let the singer sing along with the accompaniment or the original recording; the singer cannot tell how well he or she is singing. Some systems display the lyrics with a colour gradient so that the user can follow the progress of the performance. Other systems on the market display the singer's volume so that it can be compared against the song, but they do not analyse and display the performance in real time, so the human-computer interaction is weak. Moreover, they judge the performance by volume alone: as long as the singing is loud enough, the system awards a high score even when the melody is wrong. Such an evaluation is inaccurate, the singer cannot discover how accurately he or she sang, and it does little to improve the singer's skill.
In another case, an existing singing system collects and displays the singer's pitch and volume, but does not display them against the standard pitch and volume for comparison. The singer still does not know where his or her performance falls short of the original, and the system offers no guidance on how to improve, so its human-computer interaction is also weak.
Summary of the invention
The technical problem solved by the invention is to provide a human-computer interaction song singing system and method that overcome the prior-art inability to compare the singing level by intonation and the resulting weak human-computer interaction.
To solve the above technical problem, the invention provides a human-computer interaction song singing system comprising a server module, a playing module, a display module, a sound collection module, a voice output module and a sound parsing module. The server module stores the audio files, video files and subtitle files of songs, delivers the video file, subtitle file or audio file of the requested song to the playing module, and delivers the audio file to the voice output module. The playing module asks the server module to send the video file, subtitle file or audio file of the requested song and forwards the received files to the display module; the subtitle file contains the song tones and the lyrics. The sound collection module collects the singer's singing audio and outputs it to both the voice output module and the sound parsing module. The sound parsing module parses the received singing audio, extracts the singing tone, and outputs the parsed singing tone and volume to the display module. The display module receives and displays the video file, subtitle file and audio file from the playing module, receives and displays the singing tone and singing volume from the sound parsing module, and displays the singing tone alongside the corresponding song tone contained in the subtitle file. The voice output module plays, in real time, the audio from the server module's audio file or song video file together with the singing audio from the sound collection module. The system further comprises a grading module, which scores the singing tone information from the sound parsing module against the song tone information from the server module and outputs the result to the display module.
Preferably, the sound collection module comprises a voice control device which, through an amplifier and an audio splitter, divides the collected audio into two outputs: one to the sound parsing module and one to the voice output module.
Preferably, the display module also controls the alpha (transparency) value of the corresponding volume display area according to the level of the singing volume.
Preferably, the sound parsing module comprises a feature analysis unit and a feature display unit. The feature analysis unit extracts the singing frequency from the audio collected by the sound collection module, converts it into a singing tone and delivers it to the feature display unit. The feature display unit sends the display-position command for the singing tone to the display module and outputs the singing tone information to the display module.
Preferably, the feature analysis unit extracts the singing frequency by a short-time autocorrelation algorithm.
Preferably, the server module comprises a video song library unit, a subtitle file library unit, an audio song library unit and a server unit, all three library units being connected to the server unit. The playing module comprises a video playing unit, an audio playing unit and a caption playing unit, each connected to the server unit. The server unit receives a video request from the video playing unit and retrieves the corresponding song video file from the video song library unit for the video playing unit; it receives an audio playing request from the audio playing unit and retrieves the corresponding audio file from the audio song library unit for the audio playing unit; and it receives a caption request from the caption playing unit and sends the corresponding subtitle file from the subtitle file library unit to the video playing unit. The video playing unit, the audio playing unit and the caption playing unit are each connected to the display module.
To solve the above technical problem, the invention also provides a human-computer interaction song singing method comprising the following steps:
S1: obtain the subtitle file for the current unit time, and display the captions and song tone information in the subtitle file;
S2: collect the singing audio input by the singer during the current unit time, parse the singing audio, and obtain the singing tone of the singing sound for that unit time;
S3: display the singing tone alongside the matching song tone, jump to the next unit time, and return to S1.
Preferably, the step S2 comprises:
S201: extract the frequency of the singing audio;
S202: convert the frequency into a singing tone.
Preferably, the step S3 specifically comprises: calculating the difference between the singing tone and the song tone, and displaying the singing tone at a distance from the song tone according to the difference.
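The text does not spell out how step S202 converts a frequency into a singing tone; a minimal sketch, assuming the standard MIDI-style equal-temperament numbering (an assumption, not stated in the patent), also yields the semitone difference that step S3 uses for the display distance:

```python
import math

def frequency_to_tone(freq_hz, ref_hz=440.0):
    """Convert a fundamental frequency to a tone number on the
    equal-temperament semitone scale (MIDI-style: A4 = 440 Hz -> 69)."""
    return 69 + 12 * math.log2(freq_hz / ref_hz)

def tone_difference(sung_hz, song_tone):
    """Difference between the sung tone and the song tone in semitones;
    positive means the singer is sharp, negative means flat."""
    return frequency_to_tone(sung_hz) - song_tone

# 466.16 Hz (A#4) sung against a song tone of A4 (69) is about 1 semitone sharp.
print(round(tone_difference(466.16, 69), 2))  # 1.0
```

The logarithmic mapping means an equal distance on screen corresponds to an equal musical interval, regardless of how high or low the note is.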
With the above technical scheme, compared with prior systems that could not compare the singing level by tone and had weak human-computer interaction, the invention has the following beneficial effects. Because the subtitle file contains the song tones, the display module shows them as a standard for comparison while the sound parsing module processes the audio signal, extracts the singing tone, and displays it alongside the song tone from the server module. The singer can therefore see the standard tone of the song and his or her own singing tone at the same time, notice the gap between them, and adjust the performance in time, making the human-computer interaction effective. Meanwhile, because the singing level is judged by tone information, the system can directly evaluate whether the singer's intonation meets the standard, which greatly helps improve the singer's skill.
Description of drawings
Fig. 1 is a structural diagram of a human-computer interaction singing system provided by the invention;
Fig. 2 is a structural diagram of another embodiment provided by the invention;
Fig. 3 is a flow chart of a human-computer interaction singing method provided by the invention;
Fig. 4 is a composition diagram of human-computer interaction singing equipment provided by the invention.
Embodiment
To explain the technical content, structural features, objects and effects of the invention in detail, the following description is given with reference to the embodiments and the accompanying drawings.
Referring to Fig. 1, the system comprises a server module, a playing module, a display module, a sound collection module, a voice output module and a sound parsing module. The server module stores the audio files, video files and subtitle files of songs, delivers the video file, subtitle file or audio file of the requested song to the playing module, and delivers the audio file to the voice output module. The playing module asks the server module to send the video file, subtitle file or audio file of the requested song and forwards the received files to the display module; the subtitle file contains the song tones and the lyrics. The sound collection module collects the singer's singing audio and outputs it to both the voice output module and the sound parsing module. The sound parsing module parses the received singing audio, extracts the singing tone, and outputs the parsed singing tone and volume to the display module. The display module receives and displays the video file, subtitle file and audio file from the playing module, receives and displays the singing tone and singing volume from the sound parsing module, and displays the singing tone alongside the corresponding song tone contained in the subtitle file. The voice output module plays, in real time, the audio from the server module's audio file or song video file together with the singing audio from the sound collection module. The system further comprises a grading module, which scores the singing tone information from the sound parsing module against the song tone information from the server module and outputs the result to the display module.
In a KTV application, the sound of the microphone does not pass through the set-top box but is output to the loudspeakers through a power amplifier. The microphone signal is normally split in two: one path goes to the power amplifier and the other to the audio-capture port of the set-top box, that is, into the sound parsing module, where the captured audio is analysed. The sound parsing module parses the received singing audio, extracts the singing tone, and outputs the parsed singing tone and volume to the display module. Specifically, the sound parsing module obtains the current volume value, taking zero as the baseline. If the current volume value is greater than the previous one, the cursor graphic in the corresponding volume display area of the display module is shown in green and its colour deepens step by step as the volume keeps rising; if the volume value decreases step by step, the colour fades step by step towards transparent. The depth of the colour is changed by adjusting the alpha (transparency) value of the display area in software. In this way the volume cursor fades in and out.
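The fade-in/fade-out volume cursor described above can be sketched as follows. This is a minimal illustration; the alpha step size and the clamping range are assumptions, since the patent only specifies the deepen-on-rise, fade-on-fall behaviour:

```python
def volume_to_alpha(current_volume, previous_volume, previous_alpha, step=0.1):
    """Deepen the volume cursor while the volume rises and fade it toward
    transparent while it falls, clamping alpha to [0, 1]."""
    if current_volume > previous_volume:
        return min(1.0, previous_alpha + step)  # colour deepens step by step
    return max(0.0, previous_alpha - step)      # colour fades toward transparent

# Three rising volume readings deepen the cursor step by step.
alpha = 0.0
for volume in (10, 20, 30):
    alpha = volume_to_alpha(volume, volume - 10, alpha)
print(round(alpha, 2))  # 0.3
```

Driving the alpha value rather than redrawing the graphic keeps the fade cheap enough to run on every volume sample.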
Further, the display module displays the singing tone at different heights on the display according to the difference between the singing tone and the song tone. In this embodiment, the display module presents the song tones in a staff-like pitch notation: horizontal reference lines at different heights represent pitch, the length of a line represents the duration of the note, and during playback the current position is indicated by changing the colour of the line or with a marker, to remind the user of the song's progress. The singing tone can be represented by a cursor-shaped marker placed relative to the reference line of the song tone, its position determined by the difference. For example, if the singing pitch is higher than the song tone, the cursor sits a certain distance above the reference line, the distance being proportional to the difference; likewise, if the singing pitch is lower than the song tone, the cursor sits a corresponding distance below the reference line.
The song tone displayed in real time may also take other shapes or be shown by methods such as colour changes; in short, many display modes are possible and are not repeated here.
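The cursor placement described above — above the song-tone reference line when the singer is sharp, below it when flat, at a distance proportional to the difference — can be sketched as follows (the pixel scale per semitone is an assumed display parameter):

```python
def cursor_y(line_y, sung_tone, song_tone, pixels_per_semitone=8):
    """Place the singing cursor relative to the song-tone reference line:
    above it when the singer is sharp, below it when flat, with an offset
    proportional to the pitch difference.  Screen y grows downward, so a
    sharp note moves the cursor up (smaller y)."""
    difference = sung_tone - song_tone
    return line_y - difference * pixels_per_semitone

print(cursor_y(100, 71, 69))  # 2 semitones sharp -> 84 (above the line)
print(cursor_y(100, 67, 69))  # 2 semitones flat  -> 116 (below the line)
```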
Referring to Fig. 1, in this embodiment the grading module scores the singing tone information from the sound parsing module against the song tone information from the server module and outputs the result to the display module. It scores by comparing the singing tone with the song tone and deriving a score value through a certain algorithm, which is then shown on the display module.
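The embodiment leaves the scoring algorithm open ("a certain algorithm"); one plausible sketch, with assumed tolerance thresholds, averages a per-note credit based on how close each sung tone is to the song tone:

```python
def score_performance(sung_tones, song_tones, full_marks=100):
    """Average a per-note credit: full credit within 0.5 semitone of the
    song tone, falling linearly to zero at 2 semitones of error
    (the 0.5 and 2.0 semitone thresholds are illustrative assumptions)."""
    if not song_tones:
        return 0.0
    total = 0.0
    for sung, target in zip(sung_tones, song_tones):
        error = abs(sung - target)
        if error <= 0.5:
            total += 1.0
        elif error < 2.0:
            total += (2.0 - error) / 1.5
    return full_marks * total / len(song_tones)

# Two notes on pitch and one a semitone off.
print(round(score_performance([60, 62, 65], [60, 62, 64])))  # 89
```

Because the credit depends on pitch error rather than loudness, a loud but off-key performance no longer earns a high score, which is exactly the prior-art flaw the invention targets.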
Referring to Fig. 2, which shows a specific embodiment of the invention, the sound parsing module comprises a feature analysis unit and a feature display unit. The feature analysis unit extracts the singing frequency from the audio collected by the sound collection module, converts it into a singing tone and delivers it to the feature display unit; it extracts the singing frequency by a short-time autocorrelation algorithm. It is worth mentioning that there are many methods for extracting the singing frequency: this patent uses the autocorrelation method, but the average magnitude difference function (AMDF) method, parallel processing methods, the cepstrum method and others also exist. The feature display unit sends the display-position command for the singing tone to the display module and outputs the singing tone information to the display module.
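The short-time autocorrelation method named above can be sketched for a single audio frame as follows. This is a minimal illustration under the usual assumptions (mono samples, one frame at a time); a production pitch tracker would add windowing, a voicing decision, and peak interpolation:

```python
import numpy as np

def autocorr_pitch(frame, sample_rate, f_min=80.0, f_max=1000.0):
    """Estimate the fundamental frequency of one audio frame as the lag
    that maximises the short-time autocorrelation, searched between the
    lags corresponding to f_max (shortest period) and f_min (longest)."""
    frame = frame - frame.mean()            # remove DC offset
    corr = np.correlate(frame, frame, mode="full")
    corr = corr[len(corr) // 2:]            # keep non-negative lags only
    lag_min = int(sample_rate / f_max)      # shortest period considered
    lag_max = int(sample_rate / f_min)      # longest period considered
    lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return sample_rate / lag

# A pure 220 Hz tone comes back within a few Hz of 220.
sr = 8000
t = np.arange(1024) / sr
print(autocorr_pitch(np.sin(2 * np.pi * 220 * t), sr))
```

Restricting the lag search to the singing range keeps the zero-lag peak and implausibly long periods from dominating the estimate.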
The server module comprises a video song library unit, an audio song library unit, a subtitle file library unit and a server unit, all three library units being connected to the server unit. The playing module comprises a video playing unit, an audio playing unit and a caption playing unit, each connected to the server unit. The server unit receives a video request from the video playing unit and retrieves the corresponding song video file from the video song library unit for the video playing unit; it receives an audio request from the audio playing unit and retrieves the corresponding song audio file from the audio song library unit for the audio playing unit; and it receives a caption request from the caption playing unit and sends the corresponding subtitle file from the subtitle file library unit to the video playing unit. The video playing unit, the audio playing unit and the caption playing unit are each connected to the display module.
Specifically, the grading module is connected to the server unit of the server module so as to obtain, through the server unit, the subtitle file from the subtitle file library unit and thus the corresponding tone information for analysis. The grading module is also connected to the feature analysis unit of the sound parsing module to obtain the singing tone. By comparing the singing tone with the song tone it derives a score, which it outputs to the display module for display.
The comment module provides both real-time per-sentence comments and a final comment. The real-time comment sets several thresholds and prompts a different textual evaluation in each interval. The final comment is slightly different: because the grading module scores along several dimensions (pitch, melody, high notes, volume, etc.), the final comment is a multidimensional comprehensive evaluation, composed mainly of an overall evaluation plus per-dimension remarks.
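The multi-threshold real-time comment and the multidimensional final comment can be sketched like this. The threshold values, comment texts and dimension names are illustrative assumptions; the patent only fixes the structure (interval-based texts, and an overall evaluation plus per-dimension remarks):

```python
# Interval thresholds for the per-sentence comment (assumed values).
REALTIME_THRESHOLDS = [(90, "Perfect!"), (75, "Great!"),
                       (60, "Good"), (0, "Keep trying")]

def realtime_comment(sentence_score):
    """Return the comment text for the interval the sentence score falls in."""
    for threshold, text in REALTIME_THRESHOLDS:
        if sentence_score >= threshold:
            return text

def final_comment(dimension_scores):
    """Combine per-dimension scores (pitch, melody, high notes, volume ...)
    into an overall evaluation plus a remark on the weakest dimension."""
    overall = sum(dimension_scores.values()) / len(dimension_scores)
    worst = min(dimension_scores, key=dimension_scores.get)
    return (f"Overall: {realtime_comment(overall)} "
            f"Work on your {worst} ({dimension_scores[worst]:.0f}).")

print(realtime_comment(82))  # Great!
print(final_comment({"pitch": 88, "melody": 76, "volume": 64}))
```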
Further, the voice output module is connected to the server unit to obtain the song audio from the video song library unit, and is also connected through the server unit to the sound collection module to obtain the singing audio, so that the two audio streams are output together in synchrony.
Referring to Fig. 3, the invention also provides a human-computer interaction song singing method comprising the following steps:
S1: obtain, through the server module, the subtitle file for the current playing time, and display the captions and song tone information in the subtitle file on the display module;
S2: collect, through the sound collection module, the singing audio input by the singer during the current unit time, parse the singing sound with the sound parsing module, and obtain the singing tone of the singing sound for that unit time;
S3: display the singing tone alongside the matching song tone, jump to the next unit time, and return to S1.
Further, the step S2 comprises:
S201: extract the frequency of the singing audio;
S202: convert the frequency into a singing tone.
In particular, the step S3 specifically comprises: calculating the difference between the singing tone and the song tone, and displaying the singing tone at a distance from the song tone according to the difference.
Referring to Fig. 4, which shows human-computer interaction singing equipment comprising a server, a set-top box, a router, a microphone, a display and audio equipment: the set-top box is connected to the server through the router, and the microphone, display and audio equipment are each connected to the set-top box. The set-top box, router, display, audio equipment and microphone need not be integrated into a single device.
Its working principle is as follows. The server receives request information from the set-top box and provides the corresponding audio file, video file and subtitle file. The microphone collects the singing audio and delivers it to the set-top box, which parses the collected audio and, based on the human-computer interaction singing system and method described above, obtains the singing tone, compares it with the song tone in the subtitle file, and controls the display to show the song tone, the singing tone and the score. At the same time the set-top box controls the audio equipment to output the singing audio collected by the microphone. In this way the user can sing with human-computer interaction.
The above are merely embodiments of the invention and do not limit its scope of patent protection; any equivalent structural or process transformation made using the contents of the specification and drawings, whether applied directly or indirectly in other related technical fields, likewise falls within the scope of patent protection of the invention.
Claims (9)
1. A human-computer interaction song singing system, characterized in that it comprises a server module, a playing module, a display module, a sound collection module, a voice output module and a sound parsing module, wherein:
the server module stores the audio files, video files and subtitle files of songs, delivers the video file, subtitle file or audio file of the requested song to the playing module, and delivers the audio file to the voice output module;
the playing module sends to the server module an instruction to obtain the audio file, video file and subtitle file of the requested song, and sends the received audio file, video file and subtitle file to the display module; the subtitle file contains the song tones and the lyrics;
the sound collection module collects the singer's singing audio and outputs it to both the voice output module and the sound parsing module;
the sound parsing module parses the received singing audio, extracts the singing tone and volume, and outputs the parsed singing tone and volume to the display module;
the display module receives and displays the audio file, video file and subtitle file from the playing module, receives and displays the singing tone and singing volume from the sound parsing module, and displays the singing tone alongside the corresponding song tone contained in the subtitle file;
the voice output module plays, in real time, the audio from the server module's audio file or song video file together with the singing audio from the sound collection module;
the system further comprises a grading module, which scores the singing tone information from the sound parsing module against the song tone information from the server module and outputs the result to the display module.
2. The human-computer interaction song singing system according to claim 1, characterized in that the sound collection module comprises a voice control device which, through an amplifier and an audio splitter, divides the collected audio into two outputs: one to the sound parsing module and one to the voice output module.
3. The human-computer interaction song singing system according to claim 1, characterized in that the display module also controls the alpha (transparency) value of the corresponding volume display area according to the level of the singing volume.
4. The human-computer interaction song singing system according to claim 1, characterized in that the sound parsing module comprises a feature analysis unit and a feature display unit; the feature analysis unit extracts the singing frequency from the audio collected by the sound collection module, converts it into a singing tone and delivers it to the feature display unit; the feature display unit sends the display-position command for the singing tone to the display module and outputs the singing tone information to the display module.
5. The human-computer interaction song singing system according to claim 4, characterized in that the feature analysis unit extracts the singing frequency by a short-time autocorrelation algorithm.
6. The human-computer interaction song singing system according to claim 1, characterized in that the server module comprises a video song library unit, a subtitle file library unit, an audio song library unit and a server unit, all three library units being connected to the server unit;
the playing module comprises a video playing unit, an audio playing unit and a caption playing unit;
the server unit is connected to the video playing unit, the audio playing unit and the caption playing unit respectively;
the server unit receives a video request from the video playing unit and retrieves the corresponding song video file from the video song library unit for the video playing unit;
the server unit also receives an audio playing request from the audio playing unit and retrieves the corresponding audio file from the audio song library unit for the audio playing unit;
the server unit receives a caption request from the caption playing unit and sends the corresponding subtitle file from the subtitle file library unit to the video playing unit;
the video playing unit, the audio playing unit and the caption playing unit are each connected to the display module.
7. A human-computer interaction song singing method, characterized in that it comprises the following steps:
S1: obtain the subtitle file for the current unit time, and display the captions and song tone information in the subtitle file;
S2: collect the singing audio input by the singer during the current unit time, parse the singing audio, and obtain the singing tone of the singing sound for that unit time;
S3: display the singing tone alongside the matching song tone, jump to the next unit time, and return to S1.
8. The human-computer interaction song singing method according to claim 7, characterized in that the step S2 comprises:
S201: extract the frequency of the singing audio;
S202: convert the frequency into a singing tone.
9. The human-computer interaction song singing method according to claim 7, characterized in that the step S3 specifically comprises: calculating the difference between the singing tone and the song tone, and displaying the singing tone at a distance from the song tone according to the difference.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2012103334895A CN102915725A (en) | 2012-09-10 | 2012-09-10 | Human-computer interaction song singing system and method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN102915725A (en) | 2013-02-06 |
Family
ID=47614063
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2012103334895A (Pending) | CN102915725A (en) | 2012-09-10 | 2012-09-10 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102915725A (en) |
- 2012-09-10: Application CN2012103334895A filed in China, published as CN102915725A (en); status: active, Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5715179A (en) * | 1995-03-31 | 1998-02-03 | Daewoo Electronics Co., Ltd | Performance evaluation method for use in a karaoke apparatus |
JP3344195B2 (en) * | 1996-02-16 | 2002-11-11 | Yamaha Corporation | Karaoke scoring device |
CN101859560A (en) * | 2009-04-07 | 2010-10-13 | 林文信 | Automatic marking method for karaoke vocal accompaniment |
CN101902599A (en) * | 2009-05-27 | 2010-12-01 | Sony Corporation | Information display device, information display method, and information display program product |
CN101707679A (en) * | 2009-10-30 | 2010-05-12 | 深圳创维-Rgb电子有限公司 | Television, karaoke marking system and method thereof |
CN102110435A (en) * | 2009-12-23 | 2011-06-29 | 康佳集团股份有限公司 | Method and system for karaoke scoring |
Non-Patent Citations (1)
Title |
---|
浙江卫视 (Zhejiang Satellite TV), 盛大集团 (Shanda Group): "蓝巨星音准评测系统" (Blue Giant Star intonation evaluation system), 《我是大评委》 (I Am the Big Judge) * |
Cited By (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103179196B (en) * | 2013-03-05 | 2016-02-03 | 福建凯米网络科技有限公司 | System, method and data center for singing-terminal networking interaction |
CN103179196A (en) * | 2013-03-05 | 2013-06-26 | 福建星网视易信息系统有限公司 | System, method and data center for singing-terminal networking interaction |
CN104811829A (en) * | 2014-01-23 | 2015-07-29 | 苏州乐聚一堂电子科技有限公司 | Karaoke interactive multifunctional special effect system |
CN103955490A (en) * | 2014-04-16 | 2014-07-30 | 华为技术有限公司 | Audio playing method and audio playing equipment |
CN104538011A (en) * | 2014-10-30 | 2015-04-22 | 华为技术有限公司 | Tone adjusting method and device and terminal device |
CN104883516B (en) * | 2015-06-05 | 2018-08-14 | 福建凯米网络科技有限公司 | Method and system for producing a real-time singing video |
CN104883516A (en) * | 2015-06-05 | 2015-09-02 | 福建星网视易信息系统有限公司 | Method and system for producing real-time singing video |
WO2016192395A1 (en) * | 2015-06-05 | 2016-12-08 | 福建星网视易信息系统有限公司 | Singing score display method, apparatus and system |
WO2016201959A1 (en) * | 2015-06-15 | 2016-12-22 | 福建星网视易信息系统有限公司 | Method of playing back multimedia file on the basis of singing score and device utilizing same |
CN105187936A (en) * | 2015-06-15 | 2015-12-23 | 福建星网视易信息系统有限公司 | Multimedia file playing method and device based on singing audio scoring |
CN105187936B (en) * | 2015-06-15 | 2018-08-21 | 福建星网视易信息系统有限公司 | Multimedia file playing method and device based on singing audio scoring |
CN106548784B (en) * | 2015-09-16 | 2020-04-24 | 广州酷狗计算机科技有限公司 | Voice data evaluation method and system |
CN106548784A (en) * | 2015-09-16 | 2017-03-29 | 广州酷狗计算机科技有限公司 | Voice data evaluation method and system |
CN106448630A (en) * | 2016-09-09 | 2017-02-22 | 腾讯科技(深圳)有限公司 | Method and device for generating digital music file of song |
US10923089B2 (en) | 2016-09-09 | 2021-02-16 | Tencent Technology (Shenzhen) Company Limited | Method and apparatus for generating digital score file of song, and storage medium |
CN106448630B (en) * | 2016-09-09 | 2020-08-04 | 腾讯科技(深圳)有限公司 | Method and device for generating digital music score file of song |
CN106488264A (en) * | 2016-11-24 | 2017-03-08 | 福建星网视易信息系统有限公司 | Method, system and device for displaying lyrics during live singing |
CN106921878A (en) * | 2017-03-01 | 2017-07-04 | 珠海迈科智能科技股份有限公司 | Data processing method and device for a set-top box |
CN106920560A (en) * | 2017-03-31 | 2017-07-04 | 北京小米移动软件有限公司 | Song singing quality display method and device |
CN109697047B (en) * | 2017-10-23 | 2022-08-26 | 北京淳中科技股份有限公司 | Audio signal monitoring method, device and system, array imager and split screen system |
CN109697047A (en) * | 2017-10-23 | 2019-04-30 | 北京淳中科技股份有限公司 | Audio signal monitoring method, apparatus and system, array display unit and split-screen system |
CN109905789A (en) * | 2017-12-10 | 2019-06-18 | 张德明 | Karaoke microphone |
CN108922562A (en) * | 2018-06-15 | 2018-11-30 | 广州酷狗计算机科技有限公司 | Singing evaluation result display method and device |
CN111028618A (en) * | 2019-12-27 | 2020-04-17 | 郑州工程技术学院 | Vocal music singing simulation training platform |
CN111382931A (en) * | 2020-03-03 | 2020-07-07 | 黄淮学院 | Vocal music singing skill detection system |
CN111382931B (en) * | 2020-03-03 | 2023-09-01 | 黄淮学院 | Vocal music singing skill detection system |
CN111475672A (en) * | 2020-03-27 | 2020-07-31 | 咪咕音乐有限公司 | Lyric distribution method, electronic equipment and storage medium |
CN111475672B (en) * | 2020-03-27 | 2023-12-08 | 咪咕音乐有限公司 | Lyric distribution method, electronic equipment and storage medium |
CN111640344A (en) * | 2020-07-07 | 2020-09-08 | 蚌埠学院 | Mutual-aid chemical experiment operation assessment method based on mobile phone video |
CN111640344B (en) * | 2020-07-07 | 2021-11-19 | 蚌埠学院 | Mutual-aid chemical experiment operation assessment method based on mobile phone video |
Similar Documents
Publication | Title |
---|---|
CN102915725A (en) | Human-computer interaction song singing system and method | |
EP3522151A1 (en) | Method and device for processing dual-source audio data | |
CN101984490B (en) | Word-for-word synchronous lyric file generating method and system thereof | |
US9317500B2 (en) | Synchronizing translated digital content | |
CN105788610B (en) | Audio-frequency processing method and device | |
MXPA05007300A (en) | Method for creating and accessing a menu for audio content without using a display. | |
CN103137167A (en) | Method for playing music and music player | |
CN103187046A (en) | Display control apparatus and method | |
CN109740150A (en) | Address resolution method, device, computer equipment and computer readable storage medium | |
Prockup et al. | Orchestral performance companion: Using real-time audio to score alignment | |
CN106611603A (en) | Audio processing method and audio processing device | |
Müller et al. | Interactive fundamental frequency estimation with applications to ethnomusicological research | |
CN103474082A (en) | Multi-microphone vocal accompaniment marking system and method thereof | |
CN109584859A (en) | Speech synthesis method and device | |
CN102044176A (en) | Interactive instructional device and instructional method thereof | |
CN105280206A (en) | Audio playing method and device | |
CN201946138U (en) | Interactive teaching device | |
CN113658594A (en) | Lyric recognition method, device, equipment, storage medium and product | |
CN108269437A (en) | National music learning device based on augmented reality | |
KR20140115536A (en) | Apparatus for editing of multimedia contents and method thereof | |
CN111554257A (en) | Note comparison system for traditional Chinese national musical instruments and method of use thereof | |
Battier | Describe, Transcribe, Notate: Prospects and problems facing electroacoustic music | |
CN108847067A (en) | Aural comprehension training system | |
CN103680561A (en) | System and method for synchronizing human voice signal and text description data of human voice signal | |
Kim | Vocal Separation in Music Using SVM and Selective Frequency Subtraction |
Legal Events
Code | Title | Description |
---|---|---|
C06 | Publication | |
PB01 | Publication | |
C10 | Entry into substantive examination | |
SE01 | Entry into force of request for substantive examination | |
C12 | Rejection of a patent application after its publication | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20130206 |