A Markerless Computer Vision Approach For Continuous Quantification of Internal States and Affective Behaviors in Clinical Settings
FaceDx is a fully integrated computer vision workflow for analyzing internal states, affect, pain, and short-term behaviors in the clinical setting. Importantly, our approach is markerless and requires no fine-tuning: FaceDx can automatically process any video in which the subject's face is visible. We bring together pre-trained, open-source models that output facial action units (AUs) and emotions on a frame-by-frame basis.
FaceDx has been shown to accurately decode self-reported long-term mood scores, as well as short-term behaviors such as smiles, frowns, and neutral expressions. Importantly, ours is among the first such validations conducted in a clinical setting without significant human intervention or manual analysis.
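Decoding behaviors like smiles or frowns from frame-by-frame model outputs implies some aggregation over time. As a minimal sketch (hypothetical function and label names, not the repository's actual code), per-frame expression labels can be summarized into session-level proportions and durations:

```python
from collections import Counter

def summarize_session(frame_labels, fps=30.0):
    """Aggregate per-frame expression labels into session-level stats.

    frame_labels: a list like ["smile", "neutral", "frown", ...],
    one label per video frame (hypothetical label set).
    fps: video frame rate, used to convert frame counts to seconds.
    """
    counts = Counter(frame_labels)
    total = len(frame_labels)
    return {
        label: {
            "fraction": counts[label] / total,   # share of all frames
            "seconds": counts[label] / fps,      # time on screen
        }
        for label in counts
    }

# Toy example: 4 frames at 2 fps -> "smile" covers half the frames (1.0 s).
summary = summarize_session(["smile", "smile", "neutral", "frown"], fps=2.0)
```

Session-level fractions like these are the kind of feature that can then be related to self-reported mood scores.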
We use a combination of the most widely employed deep learning and computer vision libraries for our custom clinical monitoring pipeline.
- PyTorch: MTCNN, OpenGraphAU
- TensorFlow: HSEmotion, DeepFace & Partial Verify
- OpenCV: Video Processing, Intermediate Image Saving, Visualizer
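The components above compose into a frame-by-frame loop: decode a frame, detect the face, then run the AU and emotion models on the crop. The sketch below shows only that wiring, with plain callables standing in for the real models (MTCNN, OpenGraphAU, HSEmotion) and an iterable of frames standing in for OpenCV video decoding; none of the names are the repository's actual API:

```python
def analyze_frames(frames, detect_face, extract_aus, classify_emotion):
    """Run face detection, AU extraction, and emotion classification
    on each frame, keeping output aligned with the video timeline.

    frames: any iterable of images (in practice, decoded via OpenCV).
    detect_face / extract_aus / classify_emotion: stand-ins for the
    pre-trained models; frames with no detectable face yield None
    entries so downstream analysis can handle gaps explicitly.
    """
    results = []
    for i, frame in enumerate(frames):
        face = detect_face(frame)
        if face is None:
            results.append({"frame": i, "aus": None, "emotion": None})
            continue
        results.append({
            "frame": i,
            "aus": extract_aus(face),
            "emotion": classify_emotion(face),
        })
    return results

# Toy stand-ins: the string "x" plays a frame containing a face,
# None plays a frame where detection fails.
out = analyze_frames(
    ["x", None, "x"],
    detect_face=lambda f: f,               # pretend MTCNN
    extract_aus=lambda f: {"AU12": 0.9},   # pretend OpenGraphAU
    classify_emotion=lambda f: "happy",    # pretend HSEmotion
)
```

Keeping one result per frame, including detection failures, preserves the alignment needed for continuous, timeline-based quantification.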
TODO
TODO
TODO
TODO
TODO
Yuhao "Danny" Huang, MD - @YuhaoHuangMD - [email protected]
Jay Gopal - @JayRGopal - [email protected] & [email protected]
Corey Keller, MD, PhD - @DrCoreyKeller - [email protected]
Project Link: https://github.com/JayRGopal/FaceEmotionDetection