Mood-Based Music Recommendation System using Facial and Voice Emotion Recognition
Mansi D. Agrawal, Student
Cynthia Shinde
Professor, Department of Information Technology, S. D. S. M. College, Palghar, Maharashtra, India
Abstract:
Music has a strong emotional bond with people and a significant impact on mood and overall well-being. Conventional recommendation systems, such as those used by well-known streaming services, rely mostly on playlists, user history, or collaborative filtering. Although somewhat effective, these methods frequently overlook the user's current emotional state, which leads to recommendations that do not suit the moment. To improve the personalization of music suggestions, this study presents a novel framework that combines deep learning, artificial intelligence, and multimodal emotion recognition. The system applies convolutional and recurrent neural models to real-time input from facial expressions and vocal signals, classifying the user's mood into categories such as happy, sad, neutral, or angry. By integrating multimodal emotion recognition with a recommendation engine, the project ensures that music suggestions are not only history-driven but also emotion-driven, improving user engagement, satisfaction, and well-being. Potential applications extend beyond entertainment to areas such as mental health therapy, personalized learning, and adaptive human-computer interaction.
Keywords: music, mood, personalized playlist, music recommendation, integration, AI, deep learning
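To make the multimodal pipeline described in the abstract concrete, the sketch below shows one plausible way to fuse a small CNN over a face crop with an LSTM over voice features (MFCC frames) and map the predicted mood to a playlist. This is a minimal illustrative sketch, not the authors' implementation: the layer sizes, input shapes (48x48 grayscale face, 40 MFCC coefficients), label set, and playlist names are all assumptions made here for demonstration.

# Minimal sketch (assumptions, not the paper's code): a multimodal mood
# classifier fusing a CNN face branch with an LSTM voice branch, followed by
# a simple mood-to-playlist lookup.
import torch
import torch.nn as nn

MOODS = ["happy", "sad", "neutral", "angry"]                      # assumed label set
PLAYLISTS = {"happy": "upbeat_mix", "sad": "comfort_mix",
             "neutral": "focus_mix", "angry": "calm_down_mix"}    # hypothetical mapping

class MultimodalMoodNet(nn.Module):
    def __init__(self, n_mfcc=40, n_classes=len(MOODS)):
        super().__init__()
        # CNN branch: 48x48 grayscale face crop -> 64-dim embedding
        self.face_cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(32 * 12 * 12, 64), nn.ReLU())
        # RNN branch: sequence of MFCC frames -> 64-dim embedding
        self.voice_rnn = nn.LSTM(input_size=n_mfcc, hidden_size=64, batch_first=True)
        # Late fusion of the two embeddings, then mood classification
        self.classifier = nn.Linear(64 + 64, n_classes)

    def forward(self, face, mfcc):
        f = self.face_cnn(face)               # (batch, 64)
        _, (h, _) = self.voice_rnn(mfcc)      # h: (1, batch, 64)
        fused = torch.cat([f, h[-1]], dim=1)  # (batch, 128)
        return self.classifier(fused)         # mood logits

def recommend(model, face, mfcc):
    """Return the detected mood and a matching playlist name."""
    with torch.no_grad():
        mood = MOODS[model(face, mfcc).argmax(dim=1).item()]
    return mood, PLAYLISTS[mood]

# Example with dummy inputs: one face crop and 100 frames of MFCC features
model = MultimodalMoodNet()
print(recommend(model, torch.randn(1, 1, 48, 48), torch.randn(1, 100, 40)))

The late-fusion design (concatenating the two modality embeddings before a single classifier) is only one of several options the paper's framework could use; attention-based or decision-level fusion would slot into the same interface.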


