Bimodal Emotion Recognition Model for Minnan Songs

Zhenglong Xiang 1, Xialei Dong 1, Yuanxiang Li 1,2, Fei Yu 2,*, Xing Xu 2,3 and Hongrun Wu 2,*

1 School of Computer Science, Wuhan University, Wuhan 430072, China; [email protected] (Z.X.); [email protected] (X.D.); [email protected] (Y.L.)
2 School of Physics and Information Engineering, Minnan Normal University, Zhangzhou 363000, China; [email protected]
3 College of Information and Engineering, Jingdezhen Ceramic Institute, Jingdezhen 333000, China
* Correspondence: [email protected] (F.Y.); [email protected] (H.W.)

Received: 29 January 2020; Accepted: 2 March 2020; Published: 4 March 2020

Abstract: Most existing studies of Minnan songs consider emotion from the perspectives of music analysis theory and music appreciation; however, they do not explore automatic emotion recognition of Minnan songs. In this paper, we propose a model consisting of four main modules that classifies the emotion of Minnan songs from bimodal data: song lyrics and audio. In the proposed model, an attention-based Long Short-Term Memory (LSTM) neural network extracts lyrical features, and a Convolutional Neural Network (CNN) extracts audio features from the spectrum. The two kinds of extracted features are then fused by multimodal compact bilinear pooling, and the fused features are fed to the classifying module to determine the song emotion. We designed three groups of experiments to investigate the classification performance of combinations of the four main modules, to compare the proposed model with current approaches, and to examine the influence of a few key parameters on emotion recognition performance.
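To make the four-module pipeline in the abstract concrete, the following is a minimal sketch, assuming PyTorch. The class names, layer configurations, feature dimensions, MCB output size, and number of emotion classes are illustrative assumptions for exposition, not the configuration reported in this paper; the compact bilinear pooling follows the standard Count Sketch plus FFT construction.

```python
# Minimal sketch of the four-module pipeline, assuming PyTorch.
# All sizes below (embedding/hidden dims, MCB dim, class count) are
# illustrative placeholders, not this paper's reported settings.
import torch
import torch.nn as nn

class AttentionLSTM(nn.Module):
    """Lyrics module: attention-pooled LSTM over token embeddings."""
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.attn = nn.Linear(hidden_dim, 1)

    def forward(self, tokens):                       # tokens: (B, T)
        h, _ = self.lstm(self.embed(tokens))         # (B, T, H)
        w = torch.softmax(self.attn(h).squeeze(-1), dim=1)
        return (w.unsqueeze(-1) * h).sum(dim=1)      # (B, H)

class SpectrumCNN(nn.Module):
    """Audio module: small CNN over a (1, freq, time) spectrogram."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(32, out_dim)

    def forward(self, spec):                         # spec: (B, 1, F, T)
        return self.fc(self.conv(spec).flatten(1))   # (B, out_dim)

class CompactBilinearPooling(nn.Module):
    """Fusion module: MCB via Count Sketch projections multiplied in
    the FFT domain (circular convolution of the two sketches)."""
    def __init__(self, dim1, dim2, out_dim=1024):
        super().__init__()
        self.out_dim = out_dim
        for i, d in enumerate((dim1, dim2), start=1):
            self.register_buffer(f"h{i}", torch.randint(out_dim, (d,)))
            self.register_buffer(f"s{i}", torch.randint(2, (d,)).float() * 2 - 1)

    def _sketch(self, x, h, s):
        y = x.new_zeros(x.size(0), self.out_dim)
        return y.index_add_(1, h, x * s)             # count-sketch projection

    def forward(self, x1, x2):
        f1 = torch.fft.rfft(self._sketch(x1, self.h1, self.s1))
        f2 = torch.fft.rfft(self._sketch(x2, self.h2, self.s2))
        return torch.fft.irfft(f1 * f2, n=self.out_dim)

class BimodalEmotionModel(nn.Module):
    """Classifying module on top of the fused lyric + audio features."""
    def __init__(self, vocab_size, n_classes=4, mcb_dim=1024):
        super().__init__()
        self.lyrics = AttentionLSTM(vocab_size)
        self.audio = SpectrumCNN()
        self.fuse = CompactBilinearPooling(128, 128, mcb_dim)
        self.classifier = nn.Linear(mcb_dim, n_classes)

    def forward(self, tokens, spec):
        return self.classifier(self.fuse(self.lyrics(tokens), self.audio(spec)))

# Example: a batch of 2 songs, 40 lyric tokens each, 128x256 spectrograms.
model = BimodalEmotionModel(vocab_size=5000)
logits = model(torch.randint(5000, (2, 40)), torch.randn(2, 1, 128, 256))
```

The FFT-domain product implements the circular convolution of the two count sketches, which approximates the full outer (bilinear) product of the lyric and audio feature vectors at a small fraction of its dimensionality; this is why compact bilinear pooling is preferred over plain concatenation when cross-modal feature interactions matter.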