
MediaPipe Face Mesh Plotting
Summary
Face mesh detection, also known as facial landmark detection or face pose estimation, is the task of identifying and localizing specific keypoints or landmarks on a human face. It involves detecting the positions of facial features, such as the eyes, eyebrows, nose, mouth, and jawline, in an image or video.
Introduction
By utilizing the MediaPipe library, Face Mesh offers robust and efficient facial landmark tracking, allowing developers to extract detailed information about facial expressions, pose, and movements. It accurately maps a dense set of 3D facial landmarks onto the face, enabling applications such as facial animation, augmented reality (AR) effects, avatar customization, and more.
With its powerful features and ease of integration, Face Mesh with MediaPipe empowers developers to create innovative applications that leverage facial tracking and analysis, revolutionizing the way we interact with technology and unlocking new avenues for creative expression.
Parameters
Inputs
- input - (image - .png|.jpg|.jpeg): The input to the model is an image that contains a face (a minimal validation sketch is given after this list).
Output
- output - (image - .png): The output is the input image processed by the model: the detected facial landmarks are identified and a mesh is drawn over the face.
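For illustration only, here is a minimal sketch of how a caller might check that an input file matches the accepted extensions before handing it to the model. The `ALLOWED_EXTENSIONS` constant and `validate_input` helper are hypothetical and not part of this repository.

```python
from pathlib import Path

# Hypothetical helper; the accepted extensions follow the input spec above.
ALLOWED_EXTENSIONS = {".png", ".jpg", ".jpeg"}

def validate_input(path: str) -> Path:
    """Raise if the path is not an existing .png/.jpg/.jpeg file."""
    input_path = Path(path)
    if not input_path.exists():
        raise FileNotFoundError(f"Input image not found: {path}")
    if input_path.suffix.lower() not in ALLOWED_EXTENSIONS:
        raise ValueError(f"Unsupported image type: {input_path.suffix}")
    return input_path
```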
Examples
| input | output |
|---|---|
| ![]() | ![]() |
Usage for developers
The sections below describe the requirements and the code needed to run the model on our platform.
Requirements
pip install -r requirements.txt
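The exact dependency list lives in `requirements.txt` in the repository. Based on the imports used in the code below, it presumably contains at least the following packages (this is an assumption, not the file's actual contents):

```text
mediapipe
opencv-python
torch
```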
Code based on AIOZ structure
import mediapipe as mp
import cv2
import os, torch
from pathlib import Path
from typing import Any, Literal, Union
...
def do_ai_task(
        input: Union[str, Path],
        model_storage_directory: Union[str, Path],
        device: Literal["cpu", "cuda", "gpu"] = "cpu",
        *args, **kwargs) -> Any:
    """Define AI task: load model, pre-process, post-process, etc ..."""
    # Validate that the input file exists
    input_path = Path(input)
    if not input_path.exists():
        raise FileNotFoundError(f"Input image not found: {input}")

    # Read and validate the input image
    input_image = cv2.imread(str(input_path))
    if input_image is None:
        raise ValueError(f"Could not read input image: {input}")

    # Initialize MediaPipe Face Mesh and its drawing utilities
    mp_face_mesh = mp.solutions.face_mesh
    mp_drawing = mp.solutions.drawing_utils
    drawing_spec = mp_drawing.DrawingSpec(thickness=1, circle_radius=1)
    output_path = "output.png"

    with mp_face_mesh.FaceMesh(
            static_image_mode=True,
            max_num_faces=2,
            min_detection_confidence=0.5) as face_mesh:
        # MediaPipe expects RGB input, while OpenCV reads images as BGR
        rgb_image = cv2.cvtColor(input_image, cv2.COLOR_BGR2RGB)
        results = face_mesh.process(rgb_image)

        # Draw the face mesh on a copy of the original image
        annotated_image = input_image.copy()
        # Check whether any faces were detected
        if results.multi_face_landmarks:
            for face_landmarks in results.multi_face_landmarks:
                mp_drawing.draw_landmarks(
                    image=annotated_image,
                    landmark_list=face_landmarks,
                    connections=mp_face_mesh.FACEMESH_TESSELATION,
                    landmark_drawing_spec=drawing_spec,
                    connection_drawing_spec=drawing_spec)

        # Save the annotated image
        cv2.imwrite(str(output_path), annotated_image)

    # Open the file for return (the handle will be managed by the caller)
    output_image = open(output_path, "rb")

    # Convert landmarks to a string representation
    landmarks_str = str(results.multi_face_landmarks) if results.multi_face_landmarks else "No faces detected"
    return output_image, landmarks_str
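As a usage sketch (not part of the repository), the task function above could be called directly. The image path, storage directory, and device value below are placeholders:

```python
from pathlib import Path

# Hypothetical call; "face.jpg" and the model storage directory are placeholders.
output_image, landmarks_str = do_ai_task(
    input="face.jpg",
    model_storage_directory=Path("./models"),
    device="cpu",
)

# Landmark coordinates are normalized by MediaPipe: x and y lie in [0, 1],
# z is a relative depth value.
print(landmarks_str[:200])

# The caller is responsible for closing the returned file handle.
output_image.close()
```

MediaPipe also provides other connection sets, such as `mp_face_mesh.FACEMESH_CONTOURS`, which can be passed to `draw_landmarks` in the same way to change how the mesh is rendered.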
Reference
This repository is based on and inspired by Hysts's work. We sincerely appreciate their generosity in sharing the code.
License
We respect and comply with the terms of the author's license cited in the Reference section.