
Image To Sketch

Summary

Image to Sketch conversion is a fascinating process that transforms regular photographs or digital images into hand-drawn or pencil-like sketches. This technique has gained popularity among artists, designers, and photography enthusiasts because it offers a creative and artistic way to reinterpret and stylize images.
Introduction
The task of Image-to-Sketch involves converting digital images into sketches, replicating the artistic style of hand-drawn or pencil sketches. This process is achieved through various algorithms and techniques in computer vision and image processing.
By converting images to sketches, this task provides a valuable tool for artists and designers to explore creative possibilities, experiment with different styles, and generate unique visual outputs. It allows for the fusion of traditional and digital art techniques, enabling the production of captivating and expressive sketches with the convenience and flexibility of digital media.
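For intuition, the short sketch below shows one classical image-processing approach to this task: grayscale conversion, inversion, Gaussian blur, and a color-dodge blend. It is only an illustrative baseline using Pillow and NumPy, not the learning-based model shipped by this repository (described under "Usage for developers"); the file name is a placeholder.

```python
# Minimal, classical pencil-sketch effect (illustrative only; this is not the
# learning-based model used in this repository).
import numpy as np
from PIL import Image, ImageFilter, ImageOps

def pencil_sketch(path: str, blur_radius: int = 21) -> Image.Image:
    gray = Image.open(path).convert("L")                               # 1. grayscale
    inverted = ImageOps.invert(gray)                                   # 2. invert
    blurred = inverted.filter(ImageFilter.GaussianBlur(blur_radius))   # 3. blur
    # 4. color-dodge blend: gray / (1 - blurred), rescaled to 8-bit range
    g = np.asarray(gray, dtype=np.float32)
    b = np.asarray(blurred, dtype=np.float32)
    dodge = np.clip(g * 255.0 / (255.0 - b + 1e-6), 0, 255).astype(np.uint8)
    return Image.fromarray(dodge)

# pencil_sketch("photo.jpg").save("sketch.png")  # "photo.jpg" is a placeholder
```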
Parameters
Inputs
- input - (image - .png|.jpg|.jpeg): The original image that the user wants to convert into a sketch.
- ver - (text): The drawing style to apply to the transformation, chosen from the predefined styles, such as "style 1" or "style 2".
Output
- output - (image - .png): The result of the image-to-sketch transformation in the specified style.
Examples
input | ver | output |
---|---|---|
![]() | style 1 | ![]() |
![]() | style 2 | ![]() |
Usage for developers
Please find below the requirements and the reference code for running the model on our platform.
Requirements
python~=3.10
torch~=2.0.0
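The reference code below also imports torchvision and Pillow, which are not pinned above; a quick environment check (a hedged sketch, assuming any recent releases of both) could look like this:

```python
# Quick sanity check of the environment; torchvision and Pillow versions are
# not pinned in the requirements above, so recent releases are assumed.
import sys

import PIL
import torch
import torchvision

print("python     :", sys.version.split()[0])   # expected ~3.10
print("torch      :", torch.__version__)        # expected ~2.0.0
print("torchvision:", torchvision.__version__)
print("Pillow     :", PIL.__version__)
```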
Code based on AIOZ structure
import os
from pathlib import Path
from typing import Any, Literal, Union

import torch
import torchvision.transforms as transforms
from PIL import Image

from .model import Generator
...
def do_ai_task(
        input: Union[str, Path],
        ver: str,  # "style 1" or "style 2"
        model_storage_directory: Union[str, Path],
        device: Literal["cpu", "cuda", "gpu"] = "cpu",
        *args, **kwargs) -> Any:
    # torch only understands "cpu"/"cuda"; treat "gpu" as an alias for "cuda".
    device = "cuda" if device == "gpu" else device

    # Load the two pretrained generators, one per drawing style.
    model_id1 = os.path.abspath(os.path.join(model_storage_directory, "..."))
    model1 = Generator(3, 1, 3)
    model1.load_state_dict(torch.load(model_id1, map_location=device))
    model1.to(device).eval()

    model_id2 = os.path.abspath(os.path.join(model_storage_directory, "..."))
    model2 = Generator(3, 1, 3)
    model2.load_state_dict(torch.load(model_id2, map_location=device))
    model2.to(device).eval()

    # Read the input image and turn it into a 1x3xHxW tensor on the target device.
    input_img = Image.open(input).convert('RGB')
    transform = transforms.Compose([transforms.ToTensor()])
    input_img = transform(input_img)
    input_img = torch.unsqueeze(input_img, 0).to(device)

    # Run the generator matching the requested style ("style 1" is the default).
    with torch.no_grad():
        if ver == 'style 2':
            drawing = model2(input_img)[0].detach().cpu()
        else:
            drawing = model1(input_img)[0].detach().cpu()

    # Convert the tensor back to an image and return it as a binary stream.
    drawing = transforms.ToPILImage()(drawing)
    drawing.save("output.png")
    output = open("output.png", "rb")  # io.BufferedReader
    return output
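As a usage reference, the sketch below drives `do_ai_task` end to end. The import path `task`, the example image, and the weights directory are assumptions for illustration, not part of the reference implementation above.

```python
# Hypothetical driver; the import path, image path, and weights directory are
# placeholders, not part of the reference implementation above.
from pathlib import Path

from task import do_ai_task  # assumed module name

stream = do_ai_task(
    input=Path("examples/portrait.jpg"),      # any .png/.jpg/.jpeg image
    ver="style 1",                            # or "style 2"
    model_storage_directory=Path("weights"),  # directory holding the checkpoints
    device="cpu",
)
Path("sketch.png").write_bytes(stream.read())  # persist the returned stream
stream.close()
```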
Reference
This repository is based on and inspired by Caroline Chan's work. We sincerely thank them for sharing the code.
License
We respect and comply with the terms of the author's license cited in the Reference section.