Emotion and Personality Recognition

Abstract
Our goal is to build a model that analyzes sentiment in real time, recognizing an individual's emotions from text, audio, and visual input through a single user interface. We rely primarily on deep-learning methods, including natural language processing, to evaluate facial, vocal, and textual emotion, and we investigate state-of-the-art models for our Emotion Recognition Simulator. Our approach examines text, audio, and video inputs to construct a comprehensive ensemble model that fuses evidence from all three sources and presents the result in an easily understandable form. We first develop a distinct model for each of the characteristics that define an individual's mood, considering factors such as written responses to text prompts, live voice frequency, and facial expressions in real-time video; these factors are then used to identify a person's emotions or personality traits. For speech emotion recognition, we use the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) and train on it with a pretrained model. To determine emotional state from text, we use data from the study by Pennebaker and King, which collected 2,468 daily writing submissions from 34 psychology students over ten consecutive days, with every student submitting a writing assignment each day; the students' personality scores come from the Big Five Inventory (BFI), a 44-item self-report form that scores the five personality traits. After extracting features from this dataset with preprocessing techniques, we again train a pre-existing model; at inference time, a user conveys their emotions by writing paragraphs or by responding to questions. Having identified emotions from speech and writing, we turn to real-time facial expression analysis: the Kaggle FER2013 emotion recognition dataset serves as the training set, and we apply a variety of pretrained models to it to discover which best identifies a person's emotion from their facial expressions. Finally, we will produce a pickle file for each model and deploy them on the frontend and backend, yielding a fully functional simulator program that we expect to benefit society.
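
As a minimal sketch of the speech branch, the Python pipeline below extracts MFCC features from RAVDESS clips with librosa and fits a small scikit-learn network standing in for the pretrained model named above; the directory "ravdess/" and the use of mean-pooled MFCCs are illustrative assumptions, not our final design.

import os
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def extract_mfcc(path, n_mfcc=40):
    # Load the clip at its native sampling rate and average the MFCCs
    # over time, producing one fixed-length vector per recording.
    signal, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

def load_ravdess(root):
    # RAVDESS encodes the emotion as the third dash-separated field of
    # each filename, e.g. "03-01-05-01-02-01-12.wav" -> code "05" (angry).
    X, y = [], []
    for dirpath, _, files in os.walk(root):
        for name in files:
            if name.endswith(".wav"):
                X.append(extract_mfcc(os.path.join(dirpath, name)))
                y.append(name.split("-")[2])
    return np.array(X), np.array(y)

X, y = load_ravdess("ravdess/")  # hypothetical local copy of the dataset
clf = MLPClassifier(hidden_layer_sizes=(256, 128), max_iter=500)
clf.fit(X, y)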
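
A comparable sketch for the text branch, assuming the Pennebaker and King essays are exported to a CSV with a "text" column and one binary column per Big Five trait (the file name and column names are hypothetical); TF-IDF features feed one logistic-regression classifier per trait as a simple baseline in place of the pre-existing model.

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

TRAITS = ["extraversion", "neuroticism", "agreeableness",
          "conscientiousness", "openness"]  # hypothetical label columns

df = pd.read_csv("essays.csv")  # hypothetical export of the essays dataset
vectorizer = TfidfVectorizer(max_features=20000, stop_words="english")
X = vectorizer.fit_transform(df["text"])

# One binary classifier per trait, mirroring the BFI's five trait scores.
models = {}
for trait in TRAITS:
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, df[trait])
    models[trait] = clf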
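
For the facial branch, a compact convolutional baseline in Keras sized for FER2013's 48x48 grayscale images and seven emotion classes; in the project itself this architecture would be replaced by the pretrained models we compare.

import tensorflow as tf
from tensorflow.keras import layers

# Small CNN for 48x48 grayscale faces and FER2013's seven classes
# (angry, disgust, fear, happy, sad, surprise, neutral).
model = tf.keras.Sequential([
    layers.Input(shape=(48, 48, 1)),
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(7, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])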
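
The real-time loop can then reuse that network. A hedged sketch using OpenCV's bundled Haar-cascade face detector, assuming `model` is the trained Keras network from the previous sketch and a webcam is available at device index 0:

import cv2
import numpy as np

# Frontal-face detector shipped with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        # Crop the face, match the 48x48 training size, and classify.
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
        probs = model.predict(face.reshape(1, 48, 48, 1) / 255.0)
        label = str(np.argmax(probs))  # class index; map to a name in practice
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
    cv2.imshow("emotion", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()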
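
For the deployment step, each trained model can be serialized as a pickle file and loaded by the backend that serves the frontend. The Flask route, file name, and JSON schema below are illustrative assumptions, with `clf` standing for any classifier trained above:

import pickle
from flask import Flask, jsonify, request

# Serialize a trained model once (hypothetical file name)...
with open("emotion_model.pkl", "wb") as f:
    pickle.dump(clf, f)

# ...then load it at startup in a minimal backend endpoint.
app = Flask(__name__)
with open("emotion_model.pkl", "rb") as f:
    loaded = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    # Assumes the frontend posts a JSON body with a precomputed
    # feature vector matching the model's training features.
    features = [request.get_json()["features"]]
    return jsonify({"emotion": str(loaded.predict(features)[0])})

if __name__ == "__main__":
    app.run()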