WO2010048025A1 - Music recording comparison engine - Google Patents
Music recording comparison engine
- Publication number
- WO2010048025A1 (PCT/US2009/060831)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- recording
- recordings
- wave form
- computer
- time
- Prior art date
Classifications
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/19—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
- G11B27/28—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/19—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
- G11B27/28—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
- G11B27/32—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording on separate auxiliary tracks of the same or an auxiliary record carrier
- G11B27/327—Table of contents
- G11B27/329—Table of contents on a disc [VTOC]
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2240/00—Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
- G10H2240/121—Musical libraries, i.e. musical databases indexed by musical parameters, wavetables, indexing schemes using musical parameters, musical rule bases or knowledge bases, e.g. for automatic composing methods
- G10H2240/131—Library retrieval, i.e. searching a database or selecting a specific musical piece, segment, pattern, rule or parameter set
- G10H2240/141—Library retrieval matching, i.e. any of the steps of matching an inputted segment or phrase with musical database contents, e.g. query by humming, singing or playing; the steps may include, e.g. musical analysis of the input, musical feature extraction, query formulation, or details of the retrieval process
Abstract
A method is provided to compare recordings that includes providing a list of a plurality of recordings and selecting at least a first recording and a second recording from the plurality of recordings. The first recording and the second recording have predetermined lengths. The method also includes selecting a first portion of the first recording, identifying a second portion of the second recording based on the first portion, and comparing the first portion to the second portion. When the entire length of the first recording is selected as the first portion, the entire length of the second recording can be identified as the second portion. Identifying the second portion may also include normalizing it to the first portion. Also, identifying the second portion can include translating the first portion into a wave-form, searching the second recording for another wave-form similar to the translated wave-form, and matching the translated wave-form with the similar wave-form.
Description
TITLE
MUSIC RECORDING COMPARISON ENGINE
Field of the Invention
[0001] The invention relates generally to recordings, and more particularly to methods for comparing two or more time-based segments of audio recordings.
BACKGROUND OF THE INVENTION
[0002] Music is often classified by genres, such as pop/rock, classical, folk, blues, country, and the like. Also, different versions of the same song are often recorded by different artists, and even the same artist may record variations of their own songs over the span of their career, such as, for example, by performing a solo as a duet or performing an instrumentally accompanied song a cappella. While some music genres, such as pop/rock, have fewer different versions of the same song, other genres lend themselves to having multiple versions. For example, genres that include predominantly older works, such as classic rock, "oldies" music, folk music, and classical music, often have identical portions of a work recorded by many different artists. For example, three different orchestras, each conducted by a different conductor, can each have its performance of Beethoven's Sixth Symphony recorded, resulting in three inherently unique versions of that symphony. However, because each performance is unique, a simple search for Beethoven's Sixth Symphony, either in a music store or in an online Internet library, will list three separate entries. Unfortunately, without
knowing anything more than the title of the work, a person cannot easily ascertain the differences between the three recordings without listening to all three songs individually, which may not be convenient for a shopper who must either sample a small portion of the recording or purchase the entire recording prior to listening. In the case of sampling a portion of the recordings, the small sampling of each work may be insufficient to discern the differences and may not sample the portion of the music that the listener is particularly interested in isolating.
SUMMARY OF THE INVENTION
[0003] A method is provided to compare recordings that includes providing a list of a plurality of recordings and selecting at least a first recording and a second recording from the plurality of recordings. The first recording and the second recording have predetermined lengths. The method also includes selecting a first portion of the first recording, identifying a second portion of the second recording based on the first portion, and comparing the first portion to the second portion. When the entire length of the first recording is selected as the first portion, the entire length of the second recording can be identified as the second portion. Identifying the second portion may also include normalizing it to the first portion. Also, identifying the second portion can include translating the first portion into a wave-form, searching the second recording for another wave-form similar to the translated wave-form, and matching the translated wave-form with the similar wave-form.
BRIEF DESCRIPTION OF THE DRAWINGS
[0004] The features and advantages of the present invention will become more apparent from the following description of the invention taken in conjunction with the accompanying drawings, wherein like reference characters designate the same or similar parts, and wherein:
[0005] Fig. 1 shows a listing of a plurality of recordings presented in a web page in accordance with an embodiment of the invention.
[0006] Fig. 2 is a flow chart of a method in accordance with an embodiment of the invention.
[0007] Fig. 3 is another flow chart of a method in accordance with another embodiment of the invention.
[0008] Fig. 3A is a time chart showing the start and end times of two portions of two recordings desired for comparison.
[0009] Fig. 4 is another flow chart of a method in accordance with another embodiment of the invention.
[0010] Fig. 5 is a time chart showing the relationship between two recordings to be compared in accordance with an example embodiment of the invention.
[0011] Fig. 6 is another flow chart of a method in accordance with another embodiment of the invention.
[0012] Fig. 7 shows the comparison of wave-forms of two recordings in accordance with an embodiment of the invention.
DETAILED DESCRIPTION
[0013] In a first aspect of the invention, a method is provided to compare recordings. The method may be especially suited for comparing audio recordings, but is not so limited. The method includes providing a list of a plurality of recordings and selecting at least a first recording and a second recording from the plurality of recordings. The first recording and the second recording have predetermined lengths. The method also includes selecting a first portion of the first recording, identifying a second portion of the second recording based on the first portion, and comparing the first portion to the second portion. The list of the recordings may be presented, for example, in a web page which can be viewed by an Internet user through a home computer or other Internet-enabled device, including, but not limited to, a computer, telephone, personal digital assistant, digital media player, set-top box, or Internet appliance. Moreover, the list of recordings may be presented on a television through a cable television signal. Also, the list may be stored locally on a client computer, and the recordings themselves may also be stored on the client computer, which may not
be connected to a network. It is to be understood, however, that the methods for comparing audio compositions in the context of an Internet-based system described herein are not limited to use in conjunction with any specific technology or device, such as Internet technology or a computer, but may be broadly practiced separate from such technologies.
[0014] Fig. 1 shows an example of a web page in accordance with a preferred embodiment of the present invention that provides access to classical music to subscribers to the Internet archive. For example, the Internet archive can include a database of musical recordings that can be accessed for downloading and listening through the subscriber web page interface shown in Fig. 1. It should be noted that it is common for users of personal computers to have stored thereon libraries or databases of music files that may also supplement or be used in place of the online database. Moreover, in one embodiment a user may execute a program, stored on the user's computer rather than on a remote server, which can be used to access and compare recordings in accordance with the methods described herein. The listing in Fig. 1 shows that there are two performances (i.e., recordings) of the work entitled 'Cantiga de Santa Maria 10, Rosa das rosas e Fror das frores' by the artist named Teresa Berganza. The first recording was released on April 12, 2005 and is listed as being 2 minutes and 44 seconds in duration, while the second recording was released on August 14, 2007 and has the same duration. The two tracks may be compared by selecting the checkboxes above the button labeled 'compare', which initiates a comparison engine, described below with reference to various embodiments.
[0015] Fig. 2 is a flow chart showing the steps in one embodiment of a method of comparing recordings in accordance with the present invention. At step 201 a listing of recordings is provided, such as the listing in the web page shown in Fig. 1. In step 202, a first recording is selected and the associated identification information of the recording is stored, as shown in step 203. The recording identification information can be used to recall the recording selection. In step
204, a second recording is selected and that selection is stored, as shown in step
205. While only two recordings are shown as being selected in Fig. 2, it is to be
understood that more than two recordings may be selected. The selection and storage of the recording identification information builds a comparison list of recordings. In one embodiment, the listing is provided in a web page and the comparison list can issue as a session cookie, which is remembered while the user remains on the web site of the web page. Alternatively, the comparison list can be saved as a JavaScript variable, which is also remembered while the user is still on the same page. Also, the comparison list can be submitted (e.g., through an Ajax call) to a database located on the server and added to the user profile so that this data can be remembered across several sessions.
[0016] In step 206 the comparison engine can be invoked. As already mentioned above, the comparison engine may be invoked by clicking on the button labeled "compare" on the web page shown in Fig. 1. In step 207 the identification information of the selected recordings is recalled for the comparison. At step 208 the user can select one of the recordings to play in accordance with steps 209 and 210. In the case of audio recordings, the selected recording can be output to the user aurally, such as via speakers or a headset. While each recording is played, the user may use a set of manual controls to control the playback of the recording. For example, during playback the user may use a manual time control to stop, pause, slow, hasten, or move to a specific portion of the recording, or otherwise delineate a listening window within the recording for later comparison to another recording. Likewise, the second and other recordings can be played like the first recording. At step 211 the played recordings or portions thereof can be compared aurally, such as by sequentially playing back those selections.
[0017] Fig. 3 is a flow chart showing the steps of another embodiment of a method of comparing recordings in accordance with the present invention. As in the embodiment represented by Fig. 2, a first and second recording are selected in step 301 and the comparison engine is invoked at step 302. The recording information is recalled at step 303, and the first track is selected in step 304 and is played at step 305. During or after playback of the first recording, in step 306 the start time and end time of a portion of interest to the user, called a first portion for convenience, can be selected. For example, when the comparison engine is a web page, a visual playback indicator, such as a slider bar, may be used to show the
length of the recording and the elapsed time during playback with a pointer moving along the bar. The user may be able to place markers on the playback bar to indicate the beginning and ending times of the first portion. These times of interest represented by the markers are stored in step 307. Further, these markers may be represented by Δ1S, which is the start time of the first portion measured from the beginning T1o of the recording, and Δ1E, which is the end time of the first portion measured from the beginning T1o of the recording. Alternatively, the sliders may be adjusted along the playback bar to delineate an adjusted beginning and ending of the first portion. Of course, the first portion of the first recording may be selected in other ways, and those discussed above are not intended to limit the scope of the invention. The start and end times of the first portion can be referenced back to the beginning of the first recording or to another fixed point in the recording. The second recording is then selected at step 308 and the recording information is loaded at step 309. At step 310 the start and end times of the first portion are superimposed onto the second recording to create a second portion of interest, called the second portion for convenience. In step 312 the two portions are aurally compared. Another recording to be compared can be selected at step 304, or the process may end at step 313.
[0018] Fig. 3A is a time chart showing the relationship between the start and end times Δ1S and Δ1E on the first and second portions. Superimposing the start and end times as shown allows the user to isolate and compare portions of the two recordings at the same times measured from the beginning of the recordings. This may be useful where the two recordings are of similar duration and where the same portion of the song is expected at roughly the same time measured from the beginning of both recordings.
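As a concrete illustration of the Fig. 3A superimposition, the following Python sketch applies the same elapsed start and end times to a second recording. The function name and the clamping of the window to the second recording's length are assumptions added here for illustration, not details from the description:

```python
def superimpose_window(delta_1s, delta_1e, t2_total):
    """Apply the first recording's markers (delta_1s, delta_1e), in seconds,
    directly to the second recording, clamping to its total length in case
    the second recording is shorter than the marked window."""
    # Clamping is an added robustness assumption; the patent simply reuses
    # the same elapsed times on both recordings.
    return min(delta_1s, t2_total), min(delta_1e, t2_total)

# A 40 s-80 s window in the first recording maps to the same elapsed
# times in a 164 s second recording.
print(superimpose_window(40.0, 80.0, 164.0))  # (40.0, 80.0)
```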
[0019] However, in some instances, because the first portion in the first recording may not be located at the same time in the second recording, superimposing the same start and end times on the second recording may not necessarily locate a portion of the second recording which is similar to that of the first recording. This is especially true where the two recordings have different overall lengths. For example, some classical recordings are of concerts which have periods of audience applause followed by a period of musical performance.
Sometimes this applause becomes part of the recording that precedes the music. If such a recording is compared to an otherwise similar recording where such applause is omitted, the portions of interest of the two recordings will be shifted in time by an amount roughly equal to the duration of the applause. Also, two recordings may have different tempos or may have been recorded at differing recording speeds, so that not only is the time of the start of the second portion shifted, but the length of the second recording (and therefore of the second portion in the second recording) is also different. To account for such variations in the recordings, a further embodiment of the method of comparing recordings is described below with reference to the flowchart shown in Fig. 4.
[0020] In the method depicted in Fig. 4, as in Figs. 2 and 3, at least a first and a second recording are selected and the identification information about the recordings is stored at step 401. At step 402 the comparison engine is invoked. Previously stored recording information is recalled in step 403, and the first track is selected in step 404 and is played in step 405. As in the method described with reference to Fig. 3, during or after playback of the first recording, the start time and end time of a portion of interest to the user can be selected at step 406 in a similar fashion. At step 408 the second recording is selected and the previously stored identification information about that recording is loaded at step 409. At step 410 the timing information from the selection made in step 406 is used to normalize the selection of a second portion of the second recording. The second portion can be normalized by applying a normalizing factor to the first portion of the first recording. For example, with reference to Fig. 5, the total length of the first recording T1n may be 120 seconds, while the total length of the second recording T2n may be 150 seconds.
Because the two recordings are believed to be related, it is assumed that portions of the two recordings are related based on timing. Thus, in this example the normalization factor is taken to be equal to the ratio of the length of the second recording T2n to the length of the first recording T1n (i.e., T2n/T1n = 150/120 = 1.25). Therefore, it is assumed that the first 1 second of playback of the first recording will be related to the first 1.25 seconds of playback of the second recording, and so on through the two recordings. If the start time of the first portion Δ1S (measured from the beginning
of the first recording) is 40 seconds and the end time Δ1E is 80 seconds, then the normalizing factor applied to those times results in a second portion defined by a start time Δ2S of 50 seconds (1.25 × 40 seconds) and an end time Δ2E of 100 seconds (1.25 × 80 seconds), respectively, from the beginning of the second recording, for a total duration of the second portion of 50 seconds (T2n/T1n × (Δ1E − Δ1S) = 1.25 × 40 seconds). As mentioned above, when the lengths of the two recordings differ, normalizing the second portion to the first portion may yield better results in isolating a part of the second recording which is similar to the first portion than if the start time and end time of the first portion were simply used for delineating the second portion. The first and second portions can be aurally compared at step 412, and another recording can then be selected at step 404 for comparison, or the comparison can end at step 413.
[0021] Another exemplary embodiment of the method of comparing a plurality of recordings in accordance with the present invention is shown with reference to the flowchart in Fig. 6. In Fig. 6, as in Figs. 2, 3, and 4, at least a first and a second recording are selected and the identification information about the recordings is stored at step 601. At step 602 the comparison engine is invoked. The recording information is recalled at step 603, and the first track is selected at step 604 and is played at step 605. As in the method described with reference to Figs. 3 and 4, during or after playback of the first recording, the start time and end time of a portion of interest to the user can be selected at step 606 in a similar fashion to that described above. At step 608 the second recording is selected and the identification information about that recording is loaded at step 609. At step 610 the first portion selected is cross-correlated with the second recording to isolate a portion (i.e.,
a second portion) of the second recording which is similar to the first portion. For example, in one embodiment, the first portion is translated into a wave-form, as shown in the time-domain graphs of the first recording and first portion in Fig. 7. The comparison engine can search for a similar wave pattern in the second recording and identify a start and end time for the identified second portion. At step 612 the first portion and the second portion can be aurally compared. Another recording can then be selected at step 604 for comparison, or the comparison can end at step 613.
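The timing normalization worked through above with reference to Figs. 4 and 5 can be sketched as follows. This is a minimal illustration of the 120-second/150-second worked example; the function and parameter names are hypothetical:

```python
def normalize_window(t1_total, t2_total, start_1, end_1):
    """Map a window (start_1, end_1), in seconds, of the first recording
    onto the second recording by scaling with the ratio of total lengths."""
    factor = t2_total / t1_total  # normalization factor T2n/T1n
    return factor * start_1, factor * end_1

# Worked example from the text: 120 s and 150 s recordings,
# first portion spanning 40 s to 80 s.
start_2, end_2 = normalize_window(120.0, 150.0, 40.0, 80.0)
print(start_2, end_2, end_2 - start_2)  # 50.0 100.0 50.0
```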
[0022] Fig. 7 shows an example of how the second portion is identified in accordance with the embodiment of the method described above with respect to the flowchart in Fig. 6. In this example, the first recording is converted in step 700 to a first recording wave form 701, as depicted in a graph 702 showing the amplitude of the first recording wave form 701 versus elapsed time. At step 708 the second recording is converted to a second recording wave form 707, as depicted in graph 709 showing the amplitude of the second recording wave form 707 versus elapsed time.
[0023] At step 704 a first portion wave form 711 (shown in graph 705) is formed by modifying the first recording wave form 701. As shown graphically in graph 702, a first portion 703 of the first recording wave form 701 is selected having an elapsed start time T1S and an elapsed end time T1E. At step 704 the portion of the first recording wave form 701 before the elapsed start time T1S is discarded, and the first portion 703 is translated or otherwise shifted to the beginning of the first portion wave form 711, as shown in graph 705. Further, at step 704, the length T1L of the first recording wave form 701 is adjusted to be equal to the length T2L of the second recording wave form 707, and the amplitude of the first portion wave form 711 after the first portion 703 is set equal to zero.
[0024] At step 706 a fast Fourier transform of the first portion wave form 711, FFT(T1), is generated and is equated with the discrete Fourier transform of the first portion wave form 711, Fx(T1). At step 710 a fast Fourier transform of the second recording wave form 707, FFT(T2), is generated and is equated with the discrete Fourier transform of the second recording wave form 707, Fx(T2). At step 712 the first portion wave form 711 and the second wave form 707 are cross-correlated in the frequency domain by calculating the product of the conjugate of one of the respective discrete Fourier transforms (e.g., FFT(T1)) with the other discrete Fourier transform (e.g., FFT(T2)). In step 714 the inverse fast Fourier transform of that product is calculated to generate a time-domain correlation wave form 715, shown on graph 717 with the second wave form 707.
[0025] As shown in graph 717, at step 716, the correlation wave form 715 is compared with the second recording wave form 707 in the time domain. A maximum amplitude of the correlation wave form 715 is located at a certain
correlation time T2C, elapsed from the beginning of the second recording wave form 707. The correlation time T2C indicates an elapsed time in the second recording at which the first portion wave form 711 is most highly correlated with the second recording wave form 707. That is, the elapsed time of the second recording wave form 707 at the location of the maximum amplitude, correlation time T2C, of the correlation wave form 715 is identified as the point in the second recording where the first portion 703 is determined to be most likely located. The second portion 718 of the second recording can be identified as spanning a duration between a start time T2S, at a certain time before the correlation time T2C, and an end time T2E, at a certain time after the correlation time T2C. The start time T2S and end time T2E can be determined based on the correlation time T2C, such as by calculating a predetermined offset duration before or after the correlation time T2C. Such offset durations before or after the correlation time T2C can be equal and can also be based on the duration of the first portion 703. For example, in one embodiment, the offset duration before or after the correlation time T2C can be equal to a percentage of the duration of the first portion 703, such as 50% or 100%. The second portion 718 and the first portion 703 can then be aurally, or otherwise, compared by a listener or by an apparatus.
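Assuming the two wave forms are held as equal-sample-rate NumPy arrays, the frequency-domain cross-correlation of steps 704 through 716 can be sketched as follows. The function name, the zero-padding detail, and the synthetic test signal are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def locate_correlation_time(first_portion, second_recording):
    """Return the sample offset T2c at which the first portion is most
    highly correlated with the second recording (Fig. 7, steps 704-716)."""
    n = len(second_recording)
    # Step 704: shift the portion to time zero and zero-pad it to the
    # length of the second recording wave form.
    padded = np.zeros(n)
    padded[:len(first_portion)] = first_portion
    # Steps 706-714: R = FFT^-1( conj(FFT(T1)) * FFT(T2) ).
    corr = np.fft.ifft(np.conj(np.fft.fft(padded)) * np.fft.fft(second_recording))
    # Step 716: the maximum amplitude of the correlation wave form marks T2c.
    return int(np.argmax(np.abs(corr)))

# Illustration: embed a 100-sample portion 250 samples into a longer signal.
rng = np.random.default_rng(0)
portion = rng.standard_normal(100)
second = np.concatenate([np.zeros(250), portion, np.zeros(150)])
print(locate_correlation_time(portion, second))  # 250
```

Note the conjugate falls on the first portion's transform, matching the relation recited in Claim 19; the peak of the resulting correlation wave form gives the elapsed offset from which T2S and T2E can then be derived.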
[0026] Accordingly, the present invention provides a music listener or purchaser with a method for easily choosing one from among a number of different renditions of the same piece, according to his or her preference. The choice may be made without requiring the listener or purchaser to hear the entirety of the respective renditions of the piece. Therefore, the time of the listener or purchaser is used efficiently. Further, while the preferred embodiments have been described with respect to comparison of different renditions of musical compositions, the present invention may be applied to comparing any type of recording, such as a theatrical production or poetry reading that is rendered or performed in multiple forms.
[0027] While the invention has been described with reference to specific preferred embodiments, it is to be understood that deviations from such embodiments may be possible without deviating from the scope of the invention. Moreover, the methods described herein may be embodied in a computer program
stored on a computer-readable medium, and the methods may be executed by a processor capable of executing the program stored on the computer-readable medium. The computer-readable medium may also be a computer-readable program product. The computer program may be executed by a processor of a server computer which is connected to a client computer via a network, such as a LAN/WAN, a wired/wireless network, or the Internet. Moreover, the computer program may be similarly executed by a processor of a client computer. Such a client computer may have access to a database of stored recordings and/or may be configured to be in communication via a network with one or more computers which have recordings stored thereon.
Claims
1. A method of comparing a plurality of recordings, including providing a list of a plurality of recordings; selecting at least a first recording and a second recording from the plurality of recordings, wherein the first recording and the second recording have predetermined lengths; selecting a first portion of the first recording; identifying a second portion of the second recording based on the first portion; and comparing the first portion to the second portion.
2. The method of Claim 1, further including aurally playing the first recording.
3. The method of Claim 1, further including storing the selections of the first and second recordings selected and storing the first portion of the first recording selected.
4. The method of Claim 1, wherein the list is provided on a web page and wherein the plurality of recordings are stored in a database which can be accessed by selecting the first and second recordings from the web page providing the list.
5. The method of Claim 1, wherein the recording is an audio recording.
6. The method of Claim 1, wherein the comparing step includes aurally comparing the first portion to the second portion.
7. The method of Claim 1, wherein when the entire length of the recording is selected as the first portion in the first portion selecting step, the entire length of the second recording is identified as the second portion in the identifying step.
8. The method of Claim 1, wherein the step of selecting the first portion of the first recording includes selecting the start time and the end time of the portion of the first recording thereby to define the first portion.
9. The method of Claim 8, wherein the start time and the end time selecting step is performed by selecting the start time and the end time with reference to a fixed reference time in the recording.
10. The method of Claim 9, wherein the fixed reference time is the start of the recording.
11. The method of Claim 8, wherein the step of identifying a second portion includes normalizing the second portion to the first portion.
12. The method of Claim 11, wherein the step of normalizing the second portion includes: generating a normalization factor; and applying the normalization factor to the start time and end time of the first portion.
13. The method of Claim 12, wherein the normalization factor generating step generates the normalization factor based on the respective lengths of the first recording and the second recording.
14. The method of Claim 13, wherein the normalization factor generating step generates the normalization factor as a ratio of the length of the second recording to the length of the first recording.
15. The method of Claim 12, wherein the step of applying the normalization factor delineates a second portion having a normalized start time and normalized end time.
16. The method of Claim 15, wherein the normalized start time is equal to the product of the normalization factor and the start time of the first portion.
17. The method of Claim 16, wherein the normalized end time is equal to the product of the normalization factor and the end time of the first portion.
18. The method of Claim 8, wherein in the step of selecting the first portion of the first recording, at least the first portion is translated into a first portion wave form; and identifying the second portion further includes: translating the second recording into a second wave form, and comparing the first portion wave form with the second wave form.
19. The method of Claim 18, wherein comparing the first portion wave form with the second wave form includes generating a cross-correlation wave form (RT1T2) according to the relation:
RT1T2 = FFT⁻¹(FFT(T1) · FFT(T2)), where FFT(T1) · FFT(T2) represents a product of a conjugate of a fast Fourier transform of the first portion wave form and a fast Fourier transform of the second wave form, and where FFT⁻¹ is an inverse fast Fourier transform of the product.
20. The method of Claim 19, wherein in the step of identifying the second portion, comparing the translated wave form with the similar wave form includes identifying an elapsed correlation time measured from a beginning of the second wave form where the cross-correlation wave form (R_T1T2) is a maximum.
21. The method of Claim 20, wherein the step of identifying the second portion further includes identifying a start time of the second portion as at least a certain elapsed time prior to the identified elapsed correlation time and identifying an end time of the second portion as at least another certain elapsed time after the elapsed correlation time.
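The cross-correlation of Claims 19 and 20 can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation: it substitutes a direct O(n²) DFT for a true FFT so the example stays self-contained, and all function names are hypothetical. The correlation peak's lag gives the elapsed correlation time of Claim 20:

```python
import cmath

def dft(x, inverse=False):
    """Direct discrete Fourier transform (O(n^2)); a stand-in for an FFT
    so this sketch needs no external library. Fine for short inputs."""
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(xm * cmath.exp(sign * 2j * cmath.pi * k * m / n)
               for m, xm in enumerate(x)) for k in range(n)]
    return [v / n for v in out] if inverse else out

def cross_correlate(t1, t2):
    """R_T1T2 = FFT^-1( conj(FFT(T1)) * FFT(T2) ), per Claim 19.

    Assumes len(t1) <= len(t2); t1 is zero-padded to len(t2) so the two
    transforms have matching length. The result is a circular
    cross-correlation of the two wave forms.
    """
    n = len(t2)
    a = dft(list(t1) + [0.0] * (n - len(t1)))
    b = dft(list(t2))
    product = [ak.conjugate() * bk for ak, bk in zip(a, b)]
    return [v.real for v in dft(product, inverse=True)]

def elapsed_correlation_time(t1, t2, sample_rate):
    """Lag (in seconds from the start of t2) where R_T1T2 is maximal."""
    r = cross_correlate(t1, t2)
    lag = max(range(len(r)), key=lambda i: r[i])
    return lag / sample_rate
```

With the peak lag in hand, a second portion can then be delineated as in Claim 21, by stepping back some fixed interval before the elapsed correlation time for its start and forward another interval for its end.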
22. The method of Claim 1, wherein the step of comparing includes aurally comparing the first portion and the second portion.
23. A computer-readable program product comprising a computer-usable medium having control logic stored therein for causing a computer to enable a user to compare a plurality of recordings, the control logic comprising: first computer-readable program code for causing the computer to provide a list of a plurality of recordings; second computer-readable program code for causing the computer to select at least a first recording and a second recording from the plurality of recordings, wherein the first recording and the second recording have predetermined lengths; third computer-readable program code for causing the computer to select a first portion of the first recording; fourth computer-readable program code for causing the computer to identify a second portion of the second recording based on the first portion; and fifth computer-readable program code for causing the computer to compare the first portion to the second portion.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/256,208 US7994410B2 (en) | 2008-10-22 | 2008-10-22 | Music recording comparison engine |
US12/256,208 | 2008-10-22 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2010048025A1 true WO2010048025A1 (en) | 2010-04-29 |
Family
ID=42118239
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2009/060831 WO2010048025A1 (en) | 2008-10-22 | 2009-10-15 | Music recording comparison engine |
Country Status (2)
Country | Link |
---|---|
US (1) | US7994410B2 (en) |
WO (1) | WO2010048025A1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7994410B2 (en) * | 2008-10-22 | 2011-08-09 | Classical Archives, LLC | Music recording comparison engine |
CN106484891A (en) * | 2016-10-18 | 2017-03-08 | 网易(杭州)网络有限公司 | Game video-recording and playback data retrieval method and system |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030023421A1 (en) * | 1999-08-07 | 2003-01-30 | Sibelius Software, Ltd. | Music database searching |
US6528715B1 (en) * | 2001-10-31 | 2003-03-04 | Hewlett-Packard Company | Music search by interactive graphical specification with audio feedback |
US20070113724A1 (en) * | 2005-11-24 | 2007-05-24 | Samsung Electronics Co., Ltd. | Method, medium, and system summarizing music content |
US20080072741A1 (en) * | 2006-09-27 | 2008-03-27 | Ellis Daniel P | Methods and Systems for Identifying Similar Songs |
Family Cites Families (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001042866A (en) * | 1999-05-21 | 2001-02-16 | Yamaha Corp | Contents provision method via network and system therefor |
US6539395B1 (en) * | 2000-03-22 | 2003-03-25 | Mood Logic, Inc. | Method for creating a database for comparing music |
US6995309B2 (en) * | 2001-12-06 | 2006-02-07 | Hewlett-Packard Development Company, L.P. | System and method for music identification |
US20030135377A1 (en) * | 2002-01-11 | 2003-07-17 | Shai Kurianski | Method for detecting frequency in an audio signal |
US6933432B2 (en) * | 2002-03-28 | 2005-08-23 | Koninklijke Philips Electronics N.V. | Media player with “DJ” mode |
US7461392B2 (en) * | 2002-07-01 | 2008-12-02 | Microsoft Corporation | System and method for identifying and segmenting repeating media objects embedded in a stream |
US6967275B2 (en) * | 2002-06-25 | 2005-11-22 | Irobot Corporation | Song-matching system and method |
US7082394B2 (en) * | 2002-06-25 | 2006-07-25 | Microsoft Corporation | Noise-robust feature extraction using multi-layer principal component analysis |
JP3960151B2 (en) * | 2002-07-09 | 2007-08-15 | ソニー株式会社 | Similar time series detection method and apparatus, and program |
US7081579B2 (en) * | 2002-10-03 | 2006-07-25 | Polyphonic Human Media Interface, S.L. | Method and system for music recommendation |
US20040260682A1 (en) * | 2003-06-19 | 2004-12-23 | Microsoft Corporation | System and method for identifying content and managing information corresponding to objects in a signal |
US7788696B2 (en) * | 2003-10-15 | 2010-08-31 | Microsoft Corporation | Inferring information about media stream objects |
US7421305B2 (en) * | 2003-10-24 | 2008-09-02 | Microsoft Corporation | Audio duplicate detector |
EP1768102B1 (en) * | 2004-07-09 | 2011-03-02 | Nippon Telegraph And Telephone Corporation | Sound signal detection system and image signal detection system |
US20060156343A1 (en) * | 2005-01-07 | 2006-07-13 | Edward Jordan | Method and system for media and similar downloading |
WO2007045797A1 (en) * | 2005-10-20 | 2007-04-26 | France Telecom | Method, program and device for describing a music file, method and program for comparing two music files with one another, and server and terminal for carrying out these methods |
JP5145939B2 (en) * | 2005-12-08 | 2013-02-20 | 日本電気株式会社 | Section automatic extraction system, section automatic extraction method and section automatic extraction program for extracting sections in music |
KR100717387B1 (en) * | 2006-01-26 | 2007-05-11 | 삼성전자주식회사 | Method and apparatus for searching similar music |
KR100749045B1 (en) * | 2006-01-26 | 2007-08-13 | 삼성전자주식회사 | Method and apparatus for searching similar music using summary of music content |
WO2007133754A2 (en) * | 2006-05-12 | 2007-11-22 | Owl Multimedia, Inc. | Method and system for music information retrieval |
WO2008030197A1 (en) * | 2006-09-07 | 2008-03-13 | Agency For Science, Technology And Research | Apparatus and methods for music signal analysis |
US8510301B2 (en) * | 2006-12-14 | 2013-08-13 | Qnx Software Systems Limited | System for selecting a media file for playback from multiple files having substantially similar media content |
CN101226526A (en) * | 2007-01-17 | 2008-07-23 | 上海怡得网络有限公司 | Method for searching music based on musical segment information inquest |
US7849092B2 (en) * | 2007-08-13 | 2010-12-07 | Yahoo! Inc. | System and method for identifying similar media objects |
US8407230B2 (en) * | 2007-08-13 | 2013-03-26 | Yahoo! Inc. | System and method for identifying similar media objects |
US20090055376A1 (en) * | 2007-08-21 | 2009-02-26 | Yahoo! Inc. | System and method for identifying similar media objects |
US20100023328A1 (en) * | 2008-07-28 | 2010-01-28 | Griffin Jr Paul P | Audio Recognition System |
US7994410B2 (en) * | 2008-10-22 | 2011-08-09 | Classical Archives, LLC | Music recording comparison engine |
- 2008-10-22: US application US12/256,208 granted as US7994410B2, status not_active, Expired - Fee Related
- 2009-10-15: PCT application PCT/US2009/060831 published as WO2010048025A1, status active, Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030023421A1 (en) * | 1999-08-07 | 2003-01-30 | Sibelius Software, Ltd. | Music database searching |
US6528715B1 (en) * | 2001-10-31 | 2003-03-04 | Hewlett-Packard Company | Music search by interactive graphical specification with audio feedback |
US20070113724A1 (en) * | 2005-11-24 | 2007-05-24 | Samsung Electronics Co., Ltd. | Method, medium, and system summarizing music content |
US20080072741A1 (en) * | 2006-09-27 | 2008-03-27 | Ellis Daniel P | Methods and Systems for Identifying Similar Songs |
Also Published As
Publication number | Publication date |
---|---|
US7994410B2 (en) | 2011-08-09 |
US20100106267A1 (en) | 2010-04-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6748360B2 (en) | System for selling a product utilizing audio content identification | |
US6910035B2 (en) | System and methods for providing automatic classification of media entities according to consonance properties | |
US7065416B2 (en) | System and methods for providing automatic classification of media entities according to melodic movement properties | |
US7532943B2 (en) | System and methods for providing automatic classification of media entities according to sonic properties | |
US6657117B2 (en) | System and methods for providing automatic classification of media entities according to tempo properties | |
US7035873B2 (en) | System and methods for providing adaptive media property classification | |
US20070276733A1 (en) | Method and system for music information retrieval | |
US20060083119A1 (en) | Scalable system and method for predicting hit music preferences for an individual | |
US20070282860A1 (en) | Method and system for music information retrieval | |
US20120331386A1 (en) | System and method for providing acoustic analysis data | |
JP2005521979A (en) | Media player with “DJ” mode | |
EP1938325A2 (en) | Method and apparatus for processing audio for playback | |
KR100754294B1 (en) | Feature-based audio content identification | |
JP2006048319A (en) | Device, method, recording medium, and program for information processing | |
US7994410B2 (en) | Music recording comparison engine | |
WO2003091899A2 (en) | Apparatus and method for identifying audio | |
KR20050003457A (en) | Signal processing method and arrangement | |
Lin et al. | Bridging music via sound effects | |
Plazak | Transpositions Within User-Posted YouTube Lyric Videos: A Corpus Study | |
WO2011073449A1 (en) | Apparatus and method for processing audio data | |
WO2007133760A2 (en) | Method and system for music information retrieval | |
Herrera et al. | Jaume Parera Bonmati | |
WO2007132285A1 (en) | A customizable user interface |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 09822457; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 09822457; Country of ref document: EP; Kind code of ref document: A1 |