
FLAME is a lightweight and expressive generic head model learned from over 33,000 accurately aligned 3D scans. FLAME combines a linear identity shape space (trained from head scans of 3800 subjects) with an articulated neck, jaw, and eyeballs, pose-dependent corrective blendshapes, and additional global expression blendshapes. For details, please see the scientific publication.
We aim to keep this list up to date. Please feel free to add missing FLAME-based resources (publications, code repositories, datasets) either in the discussions or in a pull request.
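As a minimal illustration of the model structure described above (a sketch, not the released FLAME code), the linear identity and expression parts can be written in NumPy. The pose-dependent corrective blendshapes and the linear blend skinning of the neck, jaw, and eyeball joints are omitted, and the blendshape matrices here are random placeholders; only the dimensions (5023 vertices, 300 identity and 100 expression components) match the released model.

```python
import numpy as np

# Dimensions of the released FLAME model; the tensors below are
# random placeholders standing in for the learned model data.
N_VERTS, N_SHAPE, N_EXPR = 5023, 300, 100

rng = np.random.default_rng(0)
template = rng.standard_normal((N_VERTS, 3))             # mean head mesh (placeholder)
shape_dirs = rng.standard_normal((N_VERTS, 3, N_SHAPE))  # identity blendshapes (placeholder)
expr_dirs = rng.standard_normal((N_VERTS, 3, N_EXPR))    # expression blendshapes (placeholder)

def flame_vertices(betas, psis):
    """Linear part of FLAME: template plus identity and expression offsets.

    The full model additionally applies pose-corrective blendshapes and
    linear blend skinning for the neck, jaw, and eyeball joints.
    """
    return template + shape_dirs @ betas + expr_dirs @ psis

# Zero coefficients reproduce the template (mean) head.
verts = flame_vertices(np.zeros(N_SHAPE), np.zeros(N_EXPR))
print(verts.shape)  # (5023, 3)
```

For ready-to-use implementations of the full model, see FLAME_PyTorch and TF_FLAME in the code list below.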
## Code
List of public repositories that use FLAME (alphabetical order).
- BFM_to_FLAME: Conversion from Basel Face Model (BFM) to FLAME.
- DECA: Reconstruction of 3D faces with animatable facial expression detail from a single image.
- diffusion-rig: Personalized model to edit facial expressions, head pose, and lighting in portrait images.
- EMOCA: Reconstruction of emotional 3D faces from a single image.
- expgan: Face image generation with expression control.
- FaceFormer: Speech-driven facial animation of meshes in FLAME mesh topology.
- FLAME-Blender-Add-on: FLAME Blender Add-on.
- flame-fitting: Fitting of FLAME to scans.
- FLAME_PyTorch: FLAME PyTorch layer.
- GIF: Generating face images with FLAME parameter control.
- INSTA: Volumetric head avatars from videos in less than 10 minutes.
- INSTA-pytorch: Volumetric head avatars from videos in less than 10 minutes (PyTorch).
- learning2listen: Modeling interactional communication in dyadic conversations.
- MICA: Reconstruction of metrically accurate 3D faces from a single image.
- metrical-tracker: Metrical face tracker for monocular videos.
- NED: Facial expression of emotion manipulation in videos.
- Next3D: 3D generative model with FLAME parameter control.
- neural-head-avatars: Building a neural head avatar from video sequences.
- photometric_optimization: Fitting of FLAME to images using differentiable rendering.
- RingNet: Reconstruction of 3D faces from a single image.
- ROME: Creation of personalized avatar from a single image.
- SAFA: Animation of face images.
- SPECTRE: Speech-aware 3D face reconstruction from images.
- TRUST: Racially unbiased skin tone estimation from images.
- TF_FLAME: Fit FLAME to 2D/3D landmarks, FLAME meshes, or sample textured meshes.
- video-head-tracker: Track 3D heads in video sequences.
- VOCA: Speech-driven facial animation of meshes in FLAME mesh topology.
## Datasets
List of datasets with meshes in FLAME topology.
- BP4D+: 127 subjects, one neutral expression mesh each.
- CoMA dataset: 12 subjects, 12 extreme dynamic expressions each.
- D3DFACS: 10 subjects, 519 dynamic expressions in total.
- FaceWarehouse: 150 subjects, one neutral expression mesh each.
- FaMoS: 95 subjects, 28 dynamic expressions and head poses each, about 600K frames in total.
- Florence 2D/3D: 53 subjects, one neutral expression mesh each.
- FRGC: 531 subjects, one neutral expression mesh each.
- LYHM: 1216 subjects, one neutral expression mesh each.
- Stirling: 133 subjects, one neutral expression mesh each.
- VOCASET: 12 subjects, 40 speech sequences each with synchronized audio.
## Publications
List of FLAME-based scientific publications.
### 2023
- Learning Personalized High Quality Volumetric Head Avatars
- NeRFlame: FLAME-based conditioning of NeRF for 3D face rendering
- Text2Face: A Multi-Modal 3D Face Model
- ClipFace: Text-guided Editing of Textured 3D Morphable Models
- Expressive Speech-driven Facial Animation with controllable emotions
- Imitator: Personalized Speech-driven 3D Facial Animation
- DiffusionRig: Learning Personalized Priors for Facial Appearance Editing (CVPR 2023)
- High-Res Facial Appearance Capture from Polarized Smartphone Images (CVPR 2023)
- Instant Volumetric Head Avatars (CVPR 2023)
- PointAvatar: Deformable Point-based Head Avatars from Videos (CVPR 2023)
- Next3D: Generative Neural Texture Rasterization for 3D-Aware Head Avatars (CVPR 2023)
- Scaling Neural Face Synthesis to High FPS and Low Latency by Neural Caching (WACV 2023)
### 2022
- TeleViewDemo: Experience the Future of 3D Teleconferencing (SIGGRAPH Asia 2022)
- Visual Speech-Aware Perceptual 3D Facial Expression Reconstruction from Videos
- Realistic One-shot Mesh-based Head Avatars (ECCV 2022)
- Towards Metrical Reconstruction of Human Faces (ECCV 2022)
- Towards Racially Unbiased Skin Tone Estimation via Scene Disambiguation (ECCV 2022)
- Generative Neural Articulated Radiance Fields (NeurIPS 2022)
- Neural Emotion Director: Speech-preserving semantic control of facial expressions in “in-the-wild” videos (CVPR 2022)
- RigNeRF: Fully Controllable Neural 3D Portraits (CVPR 2022)
- I M Avatar: Implicit Morphable Head Avatars from Videos (CVPR 2022)
- Neural head avatars from monocular RGB videos (CVPR 2022)
- Learning to Listen: Modeling Non-Deterministic Dyadic Facial Motion (CVPR 2022)
- Simulated Adversarial Testing of Face Recognition Models (CVPR 2022)
- EMOCA: Emotion Driven Monocular Face Capture and Animation (CVPR 2022)
- Generating Diverse 3D Reconstructions from a Single Occluded Face Image (CVPR 2022)
- Accurate 3D Hand Pose Estimation for Whole-Body 3D Human Mesh Estimation (CVPR-W 2022)
- MOST-GAN: 3D Morphable StyleGAN for Disentangled Face Image Manipulation (AAAI 2022)
- Exp-GAN: 3D-Aware Facial Image Generation with Expression Control (ACCV 2022)
### 2021
- Data-Driven 3D Neck Modeling and Animation (TVCG 2021)
- MorphGAN: One-Shot Face Synthesis GAN for Detecting Recognition Bias (BMVC 2021)
- SIDER: Single-Image Neural Optimization for Facial Geometric Detail Recovery (3DV 2021)
- SAFA: Structure Aware Face Animation (3DV 2021)
- Learning an Animatable Detailed 3D Face Model from In-The-Wild Images (SIGGRAPH 2021)
### 2020
- Monocular Expressive Body Regression through Body-Driven Attention (ECCV 2020)
### 2019
## About
Summary of publicly available resources such as code, datasets, and scientific papers for the FLAME 3D head model: flame.is.tue.mpg.de