Lectures and Oral Presentations

International conference
July 28, 2021

Perceptual vs. automated judgements of music copyright infringement

ICMPC16-ESCOM11: The 16th International Conference on Music Perception and Cognition / The 11th Triennial Conference of ESCOM
Yuchen Y, Cronin C, Müllensiefen D, Fujii S, Savage P. E

Date
July 28, 2021 - July 28, 2021
Language
English
Presentation type
Poster presentation

Inappropriate music copyright lawsuits inhibit music creativity and waste millions of taxpayer dollars annually, but there are few objective guidelines for applying copyright law in claims involving musical works. Recent research has proposed objective algorithms to automatically calculate musical similarity, but there remains almost no relevant perceptual data.
Aims
Our study aims to support the legal system in deciding music copyright lawsuits more efficiently and accurately. Currently proposed automated algorithms for music similarity reduce subjectivity in music copyright decisions but have not been tested against perceptual data. Thus, we collected both perceptual and automated data to help determine objective standards for how much copying is required to constitute substantial similarity.
Methods
In a previous study (Yuan et al., 2020), we chose 17 adjudicated copyright cases whose main copyright issue focused on substantial similarity of melodies. To improve on this, we expanded our database to a larger and more diverse sample of 46 cases from the Music Copyright Infringement Resource (Cronin, 2020), including cases focused on similarity in timbre, lyrics, etc., as well as melodic similarity. We will conduct a perceptual experiment to collect 20 participants' judgements of music similarity for the 46 copyright cases and compare the results with the similarity levels calculated by two automated algorithms designed to measure melodic and audio similarity, respectively. To measure melodic similarity, we will apply the Percent Melodic Identity method, which was developed to quantify melodic evolution using automatic sequence alignment algorithms (Savage & Atkinson, 2015). We also plan to test the CQTNet model, which uses a convolutional neural network architecture for audio cover song identification (Yu et al., 2020), for its ability to evaluate audio similarity and music copyright infringement.
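To make the alignment-based similarity measure concrete, below is a minimal Python sketch of a percent-melodic-identity calculation using Needleman-Wunsch global alignment on pitch sequences. It is an illustration under simple assumptions (unit match/mismatch/gap scores, melodies encoded as MIDI pitch lists), not the actual implementation or scoring scheme of Savage & Atkinson (2015) or of our study.

# Minimal sketch: percent melodic identity via Needleman-Wunsch global alignment.
# Assumptions: melodies are lists of MIDI pitches; scores are illustrative only.

def percent_melodic_identity(seq_a, seq_b, match=1, mismatch=-1, gap=-1):
    """Align two pitch sequences and return the fraction of aligned positions
    that are identical (0.0-1.0), counting gaps in the alignment length."""
    n, m = len(seq_a), len(seq_b)
    # Dynamic-programming score matrix with gap-penalised borders
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if seq_a[i - 1] == seq_b[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + sub,
                              score[i - 1][j] + gap,
                              score[i][j - 1] + gap)
    # Backtrack, counting identical aligned positions and total alignment length
    i, j, identical, length = n, m, 0, 0
    while i > 0 or j > 0:
        sub = match if i > 0 and j > 0 and seq_a[i - 1] == seq_b[j - 1] else mismatch
        if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + sub:
            if seq_a[i - 1] == seq_b[j - 1]:
                identical += 1
            i, j = i - 1, j - 1
        elif i > 0 and score[i][j] == score[i - 1][j] + gap:
            i -= 1
        else:
            j -= 1
        length += 1
    return identical / length if length else 0.0

# Example with two hypothetical short melodies (MIDI pitch numbers)
melody_1 = [60, 62, 64, 65, 67, 65, 64, 62]
melody_2 = [60, 62, 64, 67, 67, 65, 62, 60]
print(f"Percent melodic identity: {percent_melodic_identity(melody_1, melody_2):.0%}")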
Results
In our previous study, the 20 participants' judgements of infringement matched the official decisions in 49-58% of the 17 selected cases across all three conditions (full-audio, melody-only, or lyrics-only versions). The automated algorithms for melodic and audio similarity each matched past decisions with an identical accuracy of 71% (12/17 cases). The updated study, with a larger sample of 46 cases selected using strictly designed criteria, should reduce failures in automatic analysis caused by non-musical aspects. We also expect the newly introduced CQTNet model to show better performance in evaluating music similarity.
References
Cronin, C. (2020). Music Copyright Infringement Resource, http://mcir.usc.edu.
Savage, P. E., & Atkinson, Q. D. (2015). Automatic tune family identification by musical sequence alignment. ISMIR.
Yu, Z., Xu, X., Chen, X., & Yang, D. (2020). Learning a Representation for Cover Song Identification Using Convolutional Neural Network. ICASSP.
Yuan, Y., Oishi, S., Cronin, C., Müllensiefen, D., Atkinson, Q. D., Fujii, S., & Savage, P. E. (2020). Perceptual vs. automated judgements of music copyright infringement. ISMIR.

Link information
URL
https://sites.google.com/sheffield.ac.uk/escom2021/home