Few things stimulate the brain the way music does. Research suggests that the music a person wants to listen to depends not only on their preferences for artists and genres but also strongly on their current mood. When mood is taken into account, the music being listened to can help a person feel much better.

However, present music recommender systems overlook the importance of analyzing a user's mood before making song suggestions. As a result, the user has to spend a substantial amount of time choosing between songs before finally settling on one.

Thus, there are two distinct problems to solve in this space: recommending the right set of songs from the start, so the user spends less time choosing and more time enjoying the music, and recommending songs well enough matched to the user that they reliably make the user feel better.

BestRec solves these two problems by recommending songs based on mood as well as music preferences.

While the product asks users directly about their music preferences, their mood is determined by sentiment analysis of the answer to a carefully chosen, research-based question posed each time they use the product. The user is also asked which topics they want to hear about, for example love, peace, joy, or the future. The user can submit multiple answers to each question.
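As a rough illustration of the mood-detection step, the sketch below maps a user's free-text answer to a coarse mood label with a tiny sentiment lexicon. The lexicon, scores, thresholds, and mood labels are illustrative assumptions, not BestRec's actual model, which would use a trained sentiment analyzer.

```python
# Minimal lexicon-based sentiment sketch (illustrative assumption,
# not BestRec's actual model). Scores each word in the user's answer
# and maps the average score to a coarse mood label.
POSITIVE = {"great": 1.0, "happy": 0.9, "excited": 0.8, "good": 0.6, "fine": 0.3}
NEGATIVE = {"terrible": -1.0, "sad": -0.9, "stressed": -0.7, "tired": -0.5}

def detect_mood(answer: str) -> str:
    words = answer.lower().split()
    scores = [POSITIVE.get(w, NEGATIVE.get(w, 0.0)) for w in words]
    avg = sum(scores) / len(scores) if scores else 0.0
    if avg > 0.2:
        return "upbeat"
    if avg < -0.2:
        return "low"
    return "neutral"
```

Because the user may submit multiple answers, a production system could average the scores across submissions rather than classifying each answer independently.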

Combining all these factors, and using content-based recommendation algorithms together with natural language processing for lemmatization, sentiment analysis, and entity analysis, the system recommends a set of twenty songs drawn from a dataset of different artists and suitable moods, along with links to sites where the songs can be listened to.
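The final ranking step can be sketched as a simple content-based scorer: each song carries a mood tag and topic tags, and songs matching the detected mood, the chosen topics, and the user's preferred artists score highest. The song records, field names, and weights here are hypothetical placeholders, not BestRec's dataset or algorithm.

```python
# Illustrative content-based ranking sketch (songs, fields, and weights
# are assumptions, not BestRec's actual dataset or scoring).
SONGS = [
    {"title": "Song A", "artist": "X", "mood": "upbeat",
     "topics": {"love", "joy"}, "url": "https://example.com/a"},
    {"title": "Song B", "artist": "Y", "mood": "low",
     "topics": {"peace"}, "url": "https://example.com/b"},
    {"title": "Song C", "artist": "X", "mood": "upbeat",
     "topics": {"future"}, "url": "https://example.com/c"},
]

def recommend(mood: str, topics: set, liked_artists: set, n: int = 20):
    """Return the top-n songs ranked by mood, topic, and artist match."""
    def score(song):
        s = 0.0
        if song["mood"] == mood:
            s += 2.0                       # mood match weighted highest
        s += len(song["topics"] & topics)  # one point per shared topic
        if song["artist"] in liked_artists:
            s += 1.0                       # bonus for a preferred artist
        return s
    ranked = sorted(SONGS, key=score, reverse=True)
    return ranked[:n]
```

With a real catalog, the same shape scales to the twenty-song output the text describes: score every song, sort, and return the top twenty with their listening links.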