Anime To Sketch

Anime to Sketch is a tool that uses artificial intelligence to transform anime-style images into black-and-white sketch drawings. It simplifies the complex details of an anime image, retaining only the basic outlines and key elements to produce a sketch that resembles a hand-drawn version.

License: MIT · Task: Image-to-Image · Framework: PyTorch · Language: English · by @AIOZNetwork


Introduction

Anime to Sketch is a model used to convert anime images into sketches. This model aims to simulate the drawing style and representation commonly seen in comic books or anime animations.

The Anime to Sketch conversion process uses deep learning to analyze the anime image and apply learned rules for strokes, outlines, contrast, and underlying structure to create a sketch-like rendition. The model is trained on anime datasets and automatically detects the important features in an image in order to recreate them as a sketch.
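As a rough illustration of the edge-detection idea behind this kind of conversion (a classical baseline, not the trained model itself; `sketch_from_array` and its threshold are hypothetical), a gradient-magnitude edge map already yields a crude sketch:

```python
import numpy as np

def sketch_from_array(img: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    """Crude sketch: finite-difference gradient magnitude, then inverted.

    img: 2-D grayscale array with values in [0, 1].
    Returns an array where edges are black (0.0) on a white (1.0)
    background, mimicking pencil lines on paper.
    """
    gy, gx = np.gradient(img.astype(float))  # gradients along rows and columns
    mag = np.hypot(gx, gy)                   # edge strength at each pixel
    return np.where(mag > threshold, 0.0, 1.0)

# Synthetic test image: a white square on a black background
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
sketch = sketch_from_array(img)
```

The trained model replaces this hand-tuned thresholding with a network that has learned which edges an artist would actually draw.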

Parameters

Inputs

  • input - (image - .png|.jpg|.jpeg): An image in the anime style.
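Since only .png, .jpg, and .jpeg inputs are accepted, a caller may want to reject other extensions up front. This helper is illustrative only (`validate_input` is not part of the model's API):

```python
from pathlib import Path

# Extensions the model accepts, per the input specification above
ALLOWED_EXTENSIONS = {".png", ".jpg", ".jpeg"}

def validate_input(path):
    """Raise ValueError unless the path has an accepted image extension."""
    ext = Path(path).suffix.lower()
    if ext not in ALLOWED_EXTENSIONS:
        raise ValueError(f"unsupported input type: {ext or '(no extension)'}")
    return Path(path)
```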

Output

  • output_image_1 - (image - .png): The input converted into a sketch-like version. The model applies techniques such as edge detection, line extraction, and texture simplification to create output that resembles a hand-drawn sketch.
  • output_image_2 - (image -.png): This output contains extracted features or information from the input anime image. It can include details such as lines, shapes, textures, or other relevant visual elements that are important for further analysis or processing.

Examples

input | output_image_1 | output_image_2

Usage for developers

The details below describe the requirements and the code needed to run the model on our platform.

Requirements

python~=3.10
torch~=2.0.0
Pillow
opencv-python

Code based on AIOZ structure

from typing import Any, Literal, Union
from pathlib import Path
import os

import cv2
import torch
from PIL import Image

from .manga_line_extraction.model import MangaLineExtractor
from .anime2sketch.model import Anime2Sketch

...
def do_ai_task(
        input: Union[str, Path],
        model_storage_directory: Union[str, Path],
        device: Literal["cpu", "cuda", "gpu"] = "cpu",
        *args, **kwargs) -> Any:
    # Sketch conversion (Anime2Sketch)
    model_sketch = os.path.abspath(os.path.join(model_storage_directory, "..."))
    to_sketch = Anime2Sketch(model_sketch, device)
    image = Image.open(input)
    result = to_sketch.predict(image)
    result.save("output_sketch.png")

    # Line extraction (MangaLineExtractor) on the grayscale input
    model_extractor = os.path.abspath(os.path.join(model_storage_directory, "..."))
    image_2 = cv2.imread(str(input), cv2.IMREAD_GRAYSCALE)
    extractor = MangaLineExtractor(model_extractor, device)
    result_2 = extractor.predict(image_2)
    cv2.imwrite("output_extractor.png", result_2)

    output_image_1 = open("output_sketch.png", "rb")  # io.BufferedReader
    output_image_2 = open("output_extractor.png", "rb")  # io.BufferedReader
    return output_image_1, output_image_2
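The two return values are open binary file handles, so the caller is responsible for persisting and closing them. A minimal consumer is sketched below, using in-memory stand-ins so the snippet runs without the model (`consume_outputs` and the fake streams are hypothetical, not part of the AIOZ structure):

```python
import io
import os
import tempfile

def consume_outputs(output_image_1, output_image_2, dest_prefix):
    """Write both binary streams to <dest_prefix>_1.png / _2.png and close them."""
    paths = []
    for idx, stream in enumerate((output_image_1, output_image_2), start=1):
        path = f"{dest_prefix}_{idx}.png"
        with open(path, "wb") as dst:
            dst.write(stream.read())
        stream.close()
        paths.append(path)
    return paths

# Stand-ins for the handles do_ai_task would return:
fake_sketch = io.BytesIO(b"sketch-bytes")
fake_lines = io.BytesIO(b"line-bytes")
prefix = os.path.join(tempfile.mkdtemp(), "output")
saved = consume_outputs(fake_sketch, fake_lines, prefix)
```

With the real model, the same pattern applies to the handles returned by do_ai_task.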

Reference

This repository is based on and inspired by Plat's work. We sincerely appreciate their generosity in sharing the code.

License

We respect and comply with the terms of the author's license cited in the Reference section.

Citations

@misc{Anime2Sketch,
  author = {Xiang, Xiaoyu and Liu, Ding and Yang, Xiao and Zhu, Yiheng and Shen, Xiaohui},
  title = {Anime2Sketch: A Sketch Extractor for Anime Arts with Deep Networks},
  year = {2021}
}

@inproceedings{xiang2022adversarial,
  title={Adversarial Open Domain Adaptation for Sketch-to-Photo Synthesis},
  author={Xiang, Xiaoyu and Liu, Ding and Yang, Xiao and Zhu, Yiheng and Shen, Xiaohui and Allebach, Jan P},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  year={2022}
}