
Image Super-Resolution with SeemoRe
Image Super-Resolution with SeemoRe reconstructs a high-resolution image from a low-resolution input using SeemoRe, an efficient super-resolution network built around expert mining. Rather than stacking ever more operations, the model routes features to specialized experts that capture the patterns most relevant to reconstruction, improving upscaling quality while keeping inference cost low.
Summary
Introduction
Reconstructing high-resolution (HR) images from low-resolution (LR) inputs remains a core challenge in image super-resolution (SR), especially when balancing performance with computational efficiency. While many recent methods achieve high-quality results through complex and diverse operations, these often come at the cost of increased inference time and resource demands.
SeemoRe addresses this issue through a novel expert mining strategy, forming an efficient and collaborative SR architecture. Instead of simply stacking multiple operations, SeemoRe introduces experts at different levels of representation. At a macro level, it mines rank-wise and spatial-wise informative features to form a global understanding of the image. At a finer level, it employs a mixture of low-rank experts, each specialized in specific patterns or attributes crucial for accurate reconstruction.
By combining these expert pathways, SeemoRe effectively "sees more" of the subtle intra-feature relationships, enabling high-fidelity reconstruction without excessive computational cost. This design makes SeemoRe especially suitable for real-time or resource-constrained super-resolution applications.
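The mixture of low-rank experts can be pictured as a gated selection over several small low-rank projections. The sketch below is a minimal, illustrative PyTorch module and not the authors' implementation: the names (LowRankExpert, MoLE, rank, topk) and the token-wise routing scheme are assumptions made only to convey the idea.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LowRankExpert(nn.Module):
    # Illustrative expert: a low-rank bottleneck (dim -> rank -> dim).
    def __init__(self, dim: int, rank: int):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)
        self.up = nn.Linear(rank, dim, bias=False)

    def forward(self, x):
        return self.up(self.down(x))

class MoLE(nn.Module):
    # Illustrative mixture of low-rank experts with top-k gating.
    def __init__(self, dim: int, num_experts: int = 4, rank: int = 16, topk: int = 1):
        super().__init__()
        self.experts = nn.ModuleList(LowRankExpert(dim, rank) for _ in range(num_experts))
        self.gate = nn.Linear(dim, num_experts)
        self.topk = topk

    def forward(self, x):                          # x: (B, N, dim) token-like features
        scores = self.gate(x)                      # (B, N, num_experts)
        weights, idx = scores.topk(self.topk, dim=-1)
        weights = F.softmax(weights, dim=-1)       # normalize over the selected experts
        out = torch.zeros_like(x)
        for k in range(self.topk):
            for e, expert in enumerate(self.experts):
                mask = (idx[..., k] == e).unsqueeze(-1)        # tokens routed to expert e
                out = out + mask * weights[..., k:k + 1] * expert(x)
        return x + out                             # residual connection

In SeemoRe, selecting only the top-k experts per feature keeps the added computation small while still letting different experts specialize in different image patterns.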
Parameters
Inputs
- input_image - (image -.png|.jpg|.jpeg): The input to the model is a low-resolution image that requires detail enhancement and resolution upscaling.
Output
- output_image - (image -.png): The output is a super-resolved, high-quality version of the input image, generated using the SeemoRe model with expert mining.
Examples
input | output |
---|---|
![]() | ![]() |
![]() | ![]() |
![]() | ![]() |
Usage for developers
Please find below the details needed to run the model on our platform, including the required dependencies and the processing code.
Requirements
pip install -r requirements.txt
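The exact contents of requirements.txt are not reproduced here. Judging from the imports in the snippet below, it will likely include at least the following packages; the version pins are only illustrative assumptions:

# illustrative requirements.txt (versions are assumptions)
torch>=2.0
numpy
pyyaml
pillow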
Code based on AIOZ structure
from pathlib import Path
from typing import Any, Literal, Union

import yaml
import torch
from torch.nn.parallel import DataParallel, DistributedDataParallel

# load_model_weights is assumed to live in .utils alongside the other helpers
from .utils import dict_to_namespace, load_image, load_model_weights, tensor_to_image
from .seemore import SeemoRe
...
def do_ai_task(
        input_image: Union[str, Path],
        model_storage_directory: Union[str, Path],
        device: Literal["cpu", "cuda", "gpu"] = "cpu",
        *args, **kwargs) -> Any:
    input_image = Path(input_image)
    model_storage_directory = Path(model_storage_directory)
    # Honor the requested device, falling back to CPU when CUDA is unavailable
    device = torch.device("cuda" if device in ("cuda", "gpu") and torch.cuda.is_available() else "cpu")

    # Load config and weights
    config_path = model_storage_directory / "..."
    weights_path = model_storage_directory / "..."
    with open(config_path, "r") as f:
        config = yaml.safe_load(f)
    cfg = dict_to_namespace(config)

    # Build the SeemoRe model from the config and move it to the target device
    model = SeemoRe(
        scale=cfg.model.scale,
        in_chans=cfg.model.in_chans,
        num_experts=cfg.model.num_experts,
        num_layers=cfg.model.num_layers,
        embedding_dim=cfg.model.embedding_dim,
        img_range=cfg.model.img_range,
        use_shuffle=cfg.model.use_shuffle,
        global_kernel_size=cfg.model.global_kernel_size,
        recursive=cfg.model.recursive,
        lr_space=cfg.model.lr_space,
        topk=cfg.model.topk
    ).to(device)
    load_model_weights(model, weights_path)
    model.eval()

    # Preprocess: HWC numpy image -> NCHW batch tensor
    img_np = load_image(input_image)
    img_tensor = torch.from_numpy(img_np).permute(2, 0, 1).unsqueeze(0).to(device)

    # Inference
    with torch.no_grad():
        output_tensor = model(img_tensor)

    # Postprocess: tensor -> image, save to disk, return as a binary stream
    output_image = tensor_to_image(output_tensor)
    output_path = "output.png"
    output_image.save(output_path)
    return open(output_path, "rb")  # io.BufferedReader
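A minimal sketch of how do_ai_task might be invoked. The paths used here ("./weights", "input.png", "super_resolved.png") are purely illustrative; the actual config and weight file names inside model_storage_directory follow the platform layout elided above.

from pathlib import Path

# Hypothetical call; adjust paths to your local setup.
result = do_ai_task(
    input_image=Path("input.png"),
    model_storage_directory=Path("./weights"),
    device="cuda",
)

# Persist the returned binary stream to a file of your choice.
with open("super_resolved.png", "wb") as dst:
    dst.write(result.read())
result.close()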
Reference
This repository is based on and inspired by the work of Eduard Zamfir and colleagues. We sincerely appreciate their generosity in sharing the code.
License
We respect and comply with the terms of the author's license cited in the Reference section.
Citation
@inproceedings{zamfir2024details,
  title={See More Details: Efficient Image Super-Resolution by Experts Mining},
  author={Eduard Zamfir and Zongwei Wu and Nancy Mehta and Yulun Zhang and Radu Timofte},
  booktitle={International Conference on Machine Learning},
  year={2024},
  organization={PMLR}
}