I participated in the 1st OUTTA AI Bootcamp and completed the final project required for the bootcamp. The user enters a phrase from everyday life or a movie line they want to act out; when they then perform the matching facial expression in front of the webcam, a score for their facial expression acting is displayed on the screen.
- The implementation is based on BERT and VGGNet, with OpenCV handling the webcam input.
- Datasets used: FER2013, and the Korean SNS Conversation Dataset, which can be downloaded from the AI HUB site.
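One plausible way to combine the two models is to have BERT classify the entered phrase and VGGNet classify the webcam frame into the same set of emotion classes, then score how closely the two probability distributions agree. The sketch below illustrates that final scoring step only; the function names, the FER2013 class order, and the choice of cosine similarity are my assumptions, not the project's confirmed method.

```python
# Hypothetical sketch of the scoring step: the text model (BERT) and the
# face model (VGGNet) each output a probability distribution over the same
# emotion classes, and the acting score is their similarity scaled to 0-100.
import math

# FER2013's seven emotion classes (order assumed here).
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length probability vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def acting_score(text_probs, face_probs):
    """0-100 score for how well the facial expression matches the phrase's emotion."""
    return round(100 * cosine_similarity(text_probs, face_probs), 1)

# Example: the phrase is classified as mostly "happy", and so is the webcam frame,
# so the score is high.
text_probs = [0.02, 0.01, 0.02, 0.85, 0.03, 0.05, 0.02]
face_probs = [0.05, 0.02, 0.03, 0.70, 0.05, 0.10, 0.05]
print(acting_score(text_probs, face_probs))
```

In a live loop, `face_probs` would be recomputed per webcam frame (e.g. via OpenCV's `cv2.VideoCapture`), while `text_probs` is computed once when the phrase is entered.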