
MediaPipe Face Mesh Plotting
Summary
Face mesh detection, also known as facial landmark detection or face pose estimation, is the task of identifying and localizing specific keypoints, or landmarks, on a human face. It involves detecting the positions of facial features, such as the eyes, eyebrows, nose, mouth, and jawline, in an image or video.
Introduction
By utilizing the MediaPipe library, Face Mesh offers robust and efficient facial landmark tracking, allowing developers to extract detailed information about facial expressions, pose, and movements. It accurately maps a dense set of 3D facial landmarks onto the face, enabling applications such as facial animation, augmented reality (AR) effects, avatar customization, and more.
With its powerful features and ease of integration, Face Mesh with MediaPipe empowers developers to create innovative applications that leverage facial tracking and analysis, revolutionizing the way we interact with technology and unlocking new avenues for creative expression.
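To make the landmark representation concrete, here is a minimal sketch (not part of the platform code below) showing how the 3D landmarks returned by MediaPipe Face Mesh can be read and mapped to pixel coordinates. The input path `face.jpg` and the landmark index used are illustrative assumptions.
```python
import cv2
import mediapipe as mp

# Minimal sketch: inspect the 3D landmarks returned by MediaPipe Face Mesh.
# "face.jpg" is a placeholder path; any image containing a face works.
image = cv2.imread("face.jpg")
height, width = image.shape[:2]

with mp.solutions.face_mesh.FaceMesh(static_image_mode=True) as face_mesh:
    results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.multi_face_landmarks:
    landmarks = results.multi_face_landmarks[0].landmark  # 468 landmarks per face
    point = landmarks[1]  # x and y are normalized to [0, 1]; z is relative depth
    print(f"normalized: ({point.x:.3f}, {point.y:.3f}, {point.z:.3f})")
    print(f"pixel:      ({int(point.x * width)}, {int(point.y * height)})")
```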
Parameters
Inputs
- input (image: .png | .jpg | .jpeg): The input to the model is an image that contains a face.
Output
- output (image: .png): The output is the input image with the detected facial landmarks marked and a mesh drawn over the face.
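For illustration, a call to the `do_ai_task` function defined in the Usage for developers section follows this contract; `face.jpg` is a placeholder path, not a file shipped with the model.
```python
annotated_file, landmark_string = do_ai_task(
    input="face.jpg",              # any .png / .jpg / .jpeg image containing a face
    model_storage_directory=".",   # kept for the platform interface
    device="cpu",
)
# annotated_file: open binary handle on "output.png", the image with the mesh drawn
# landmark_string: textual dump of results.multi_face_landmarks
```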
Examples
| input | output |
|---|---|
| ![]() | ![]() |
Usage for developers
The sections below list the requirements and the code used to run the model on our platform.
Requirements
torch
mediapipe
opencv-python
Code based on AIOZ structure
```python
import mediapipe as mp
import cv2
import os, torch
from pathlib import Path
from typing import Any, Literal, Union
...

def do_ai_task(
        input: Union[str, Path],
        model_storage_directory: Union[str, Path],
        device: Literal["cpu", "cuda", "gpu"] = "cpu",
        *args, **kwargs) -> Any:
    """Define AI task: load model, pre-process, post-process, etc ..."""
    # Define AI task workflow. Below is an example.
    # `model_storage_directory` and `device` are part of the platform interface;
    # MediaPipe Face Mesh ships its own model and runs on CPU here.
    mp_face_mesh = mp.solutions.face_mesh
    mp_drawing = mp.solutions.drawing_utils
    drawing_spec = mp_drawing.DrawingSpec(thickness=1, circle_radius=1)
    output_path = "output.png"

    with mp_face_mesh.FaceMesh(
            static_image_mode=True,
            max_num_faces=2,
            min_detection_confidence=0.5) as face_mesh:
        # Convert the BGR image to RGB and process it with MediaPipe Face Mesh.
        input_image = cv2.imread(str(input))
        with torch.no_grad():
            results = face_mesh.process(cv2.cvtColor(input_image, cv2.COLOR_BGR2RGB))

        # Draw the face mesh tessellation for every detected face.
        annotated_image = input_image.copy()
        if results.multi_face_landmarks:  # None when no face is detected
            for face_landmarks in results.multi_face_landmarks:
                mp_drawing.draw_landmarks(
                    image=annotated_image,
                    landmark_list=face_landmarks,
                    connections=mp_face_mesh.FACEMESH_TESSELATION,
                    landmark_drawing_spec=drawing_spec,
                    connection_drawing_spec=drawing_spec)
        multi_face_landmarks = results.multi_face_landmarks

    # Save the annotated image and return it as a binary file object
    # together with the textual dump of the detected landmarks.
    cv2.imwrite(output_path, annotated_image)
    output_image = open(output_path, "rb")  # io.BufferedReader
    return output_image, str(multi_face_landmarks)
```
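Two notes on the code above: OpenCV loads images in BGR channel order while MediaPipe expects RGB, which is why `cv2.cvtColor` is applied before `face_mesh.process`, and `static_image_mode=True` makes Face Mesh treat each call as an independent image rather than a frame in a video stream. The hedged sketch below shows one way to run the task end to end; the file names are assumptions.
```python
if __name__ == "__main__":
    # Run the task on a local image and persist the annotated result.
    # "face.jpg" and "annotated.png" are placeholder file names.
    annotated_file, landmark_string = do_ai_task("face.jpg", model_storage_directory=".")
    with open("annotated.png", "wb") as destination:
        destination.write(annotated_file.read())
    annotated_file.close()
    print(landmark_string[:200])  # short preview of the landmark dump
```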
Reference
This repository is based on and inspired by Hysts's work. We sincerely appreciate their generosity in sharing the code.
License
We respect and comply with the terms of the author's license cited in the Reference section.