Computing Reviews
Multimodal mood classification of Hindi and Western songs
Patra B., Das D., Bandyopadhyay S.  Journal of Intelligent Information Systems 51 (3): 579-596, 2018. Type: Article
Date Reviewed: Feb 4 2019

Undoubtedly, music mood classification is a fascinating area for music researchers, as people have begun to organize music libraries by mood rather than by other criteria such as artist or genre. Mood classification of Indian music is relatively new compared to the corresponding work on Western music, and in this context Patra et al. propose an interesting mood taxonomy for classifying Hindi and Western songs.

The proposed taxonomy has five classes:

(1) Class_Ex: Excited, Astonished, Aroused;
(2) Class_Ha: Happy, Delighted, Pleased;
(3) Class_Ca: Calm, Relaxed, Satisfied;
(4) Class_Sa: Sad, Gloomy, Depressed; and
(5) Class_An: Angry, Alarmed, Tensed.

The logic behind such a fivefold classification is “the significant invariability among the audio features of the subclasses with respect to its corresponding mood class. For example, a happy and a delighted song have high valence, whereas an aroused and an excited song have high arousal.”
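The taxonomy above is essentially a mapping from fine-grained mood labels to five coarse classes positioned in the valence-arousal plane. A minimal Python sketch of that mapping follows; the valence/arousal placements are illustrative assumptions based on the example quoted above, not values from the paper.

```python
# Hypothetical encoding of the five-class mood taxonomy; the
# valence/arousal tags are illustrative, not the authors' exact values.
MOOD_TAXONOMY = {
    "Class_Ex": {"moods": ["excited", "astonished", "aroused"],
                 "valence": "high", "arousal": "high"},
    "Class_Ha": {"moods": ["happy", "delighted", "pleased"],
                 "valence": "high", "arousal": "medium"},
    "Class_Ca": {"moods": ["calm", "relaxed", "satisfied"],
                 "valence": "high", "arousal": "low"},
    "Class_Sa": {"moods": ["sad", "gloomy", "depressed"],
                 "valence": "low", "arousal": "low"},
    "Class_An": {"moods": ["angry", "alarmed", "tensed"],
                 "valence": "low", "arousal": "high"},
}

def mood_class(mood: str) -> str:
    """Map a fine-grained mood label to its coarse taxonomy class."""
    for cls, info in MOOD_TAXONOMY.items():
        if mood.lower() in info["moods"]:
            return cls
    raise KeyError(f"unknown mood: {mood}")
```

For example, `mood_class("Happy")` and `mood_class("Delighted")` both resolve to `Class_Ha`, reflecting the intuition quoted above that subclasses within a class share similar audio characteristics.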

The next step is to annotate the audio and lyrics of Hindi and Western songs using the proposed mood taxonomy. However, for some Hindi songs, the mood conveyed by the lyrics contradicts the mood perceived from the audio. The authors therefore adopt a correlation-based feature selection technique to identify the important audio and lyric features, and implement feed-forward neural networks (FFNNs) to develop mood classification systems.
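The pipeline described above can be sketched as follows: rank features by their correlation with the class label, keep the top-k, and train a feed-forward network on the selected features. This is a minimal illustration on synthetic data; the feature count, k, and network size are assumptions, not the authors' configuration, and scikit-learn's MLPClassifier stands in for their FFNN.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))           # stand-in for audio/lyric features
y = (X[:, 0] + X[:, 3] > 0).astype(int)  # labels driven by features 0 and 3

# Correlation-based feature selection: absolute Pearson correlation
# between each feature and the label; keep the k most correlated.
k = 2
corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
selected = np.argsort(corr)[-k:]

# Feed-forward neural network trained on the selected features only.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X[:, selected], y)
print(sorted(selected.tolist()))  # the two informative features, 0 and 3
```

On this toy data the correlation ranking recovers exactly the two features that determine the label, which is the point of the selection step: discard inputs that carry little class information before training the network.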

The authors successfully develop “several mood classification systems ... for [both] Hindi and Western songs” based on audio and lyric features as well as their combination. The FFNNs “for Hindi and Western songs obtained the maximum F-measures of 0.751 and 0.835, respectively.”
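The reported scores are F-measures, the harmonic mean of precision and recall. A quick worked example with made-up counts (not the authors' data) shows how such a figure is computed:

```python
def f_measure(tp: int, fp: int, fn: int) -> float:
    """F1 score from true-positive, false-positive, false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative counts only: precision = 0.8, recall = 0.8, so F1 = 0.8.
print(f_measure(tp=80, fp=20, fn=20))
```

Equivalently, F1 = 2·tp / (2·tp + fp + fn), which makes clear why it penalizes both missed detections and false alarms.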

The paper is interesting, has some useful references, and will definitely draw interest from music researchers and students, music enthusiasts, musicians, and musicologists.

My personal view is that songs fall under composite art in which it is not simply the lyrics and the tune that are crucial, but their interaction; the left and right hemispheres of the brain can and do perform such an interactive processing of speech and music. The strength of the paper thus lies in considering such an interaction (“combination”) in the study.

Reviewer:  Soubhik Chakraborty Review #: CR146412 (1905-0185)
Sound And Music Computing (H.5.5 )
 

Reproduction in whole or in part without permission is prohibited.   Copyright © 2000-2021 ThinkLoud, Inc.