Anime Background Style Transfer

Summary

Introduction

Anime backgrounds, also known as anime background art or anime scenery, are the visual elements that form the backdrop of animated scenes in anime. They are carefully designed and illustrated to establish the setting, atmosphere, and context for the characters and events within the anime.

Parameters

Inputs

  • input - (image - .png|.jpg|.jpeg): A real-world photograph that the model transforms into an anime-style background, preserving the objects and scenes from the original image.
  • style - (text): Selects the style of the generated anime background. Four styles are supported: Makoto Shinkai, Mamoru Hosoda, Hayao Miyazaki, and Satoshi Kon (the sketch after the Output section illustrates one way these names could be handled).

Output

  • output - (image - .png): The generated anime-style background produced from the input image and the selected style.
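
To make the style input concrete, here is a minimal sketch of how the four supported style names could be validated and mapped to pretrained generator checkpoints. The checkpoint file names and the validate_style helper are assumptions for illustration, not part of the platform code.

# Illustrative sketch only: the checkpoint file names below are assumptions.
SUPPORTED_STYLES = {
    "Makoto Shinkai": "Shinkai_net_G_float.pth",   # assumed file name
    "Mamoru Hosoda": "Hosoda_net_G_float.pth",     # assumed file name
    "Hayao Miyazaki": "Hayao_net_G_float.pth",     # assumed file name
    "Satoshi Kon": "Paprika_net_G_float.pth",      # assumed file name
}

def validate_style(style: str) -> str:
    """Return the checkpoint file name for a supported style, or raise a clear error."""
    if style not in SUPPORTED_STYLES:
        raise ValueError(
            f"Unsupported style {style!r}; expected one of {sorted(SUPPORTED_STYLES)}"
        )
    return SUPPORTED_STYLES[style]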

Examples

Example results pair an input photograph, the selected style, and the generated output image for the Makoto Shinkai and Hayao Miyazaki styles.

Usage for developers

Below you will find the requirements and example code needed to run the model on our platform.

Requirements

torch
torchvision
Pillow
numpy

Code based on AIOZ structure

import os
import torch
import numpy as np
import torchvision.transforms as transforms
from pathlib import Path
from typing import Any, Literal, Union
from .network.Transformer import Transformer
from PIL import Image

...
def get_model(style, path, device):
    """Load the pretrained generator for the requested style from `path`."""
    ...

def adjust_image_for_model(img):
    """Adjust the input image (e.g. resize) so its dimensions suit the model."""
    ...

def do_ai_task(
        input: Union[str, Path],
        style: str,
        model_storage_directory: Union[str, Path],
        device: Literal["cpu", "cuda", "gpu"] = "cpu",
        *args, **kwargs) -> Any:
    """Define AI task: load model, pre-process, post-process, etc ..."""
    # Define AI task workflow. Below is an example
    # Respect the requested device, falling back to CPU if CUDA is unavailable
    use_cuda = device in ("cuda", "gpu") and torch.cuda.is_available()
    device = torch.device("cuda" if use_cuda else "cpu")
    # Derive the output path from the input file name, whatever its extension
    output_path = os.path.splitext(str(input))[0] + "_output.png"

    input_image = Image.open(input).convert("RGB")
    img = adjust_image_for_model(input_image)

    # load image as a (H, W, C) uint8 array
    input_img = np.asarray(img)
    # RGB -> BGR (the pretrained generators expect BGR channel order)
    input_img = input_img[:, :, [2, 1, 0]]
    input_img = transforms.ToTensor()(input_img).unsqueeze(0)
    # preprocess: rescale pixel values from [0, 1] to (-1, 1)
    input_img = -1 + 2 * input_img

    model = get_model(style, model_storage_directory, device)

    if use_cuda:
        # Allows to specify a card for calculation
        input_img = input_img.to(device)
    else:
        input_img = input_img.float()

    with torch.no_grad():
        output_image = model(input_img)
        output_image = output_image[0]

    # BGR -> RGB
    output_image = output_image[[2, 1, 0], :, :]
    # de-normalize from (-1, 1) back to [0, 1] and move to CPU
    output_image = output_image.data.cpu().float() * 0.5 + 0.5
    transforms.ToPILImage()(output_image).save(output_path)

    output_image = open(output_path, "rb")  # io.BufferedReader
    return output_image
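
A minimal usage sketch follows; the image path and weights directory are placeholders, and the returned object is the open file handle that do_ai_task produces at the end of its workflow.

# Hypothetical invocation: "samples/city.jpg" and "weights" are placeholder paths.
result = do_ai_task(
    input="samples/city.jpg",
    style="Makoto Shinkai",
    model_storage_directory="weights",
    device="cuda",
)
print(result.name)  # path of the generated PNG, e.g. "samples/city_output.png"
result.close()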

Reference

This repository is based on and inspired by Akiyama Sho's work. We sincerely thank them for sharing the code.

License

We respect and comply with the terms of the author's license cited in the Reference section.