
Model Description

This GAN-based model performs image colorization, transforming grayscale images into color images. It leverages a generator network to predict the color channels and a discriminator network to improve the colorization quality through adversarial training.
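
Concretely, the model works in the Lab color space: the generator takes the L (lightness) channel as input and predicts the two a*b* chroma channels, which are then recombined with L and converted back to RGB. A minimal sketch of the tensor interface, matching the inference code further down this card:

```python
import torch

# One 256x256 grayscale image represented as the L channel,
# scaled to [-1, 1] as in the preprocessing code below
L = torch.randn(1, 1, 256, 256)

# The generator maps L to the a*b* chroma channels:
#   ab = G_net(L)                    # shape (1, 2, 256, 256)
#   lab = torch.cat([L, ab], dim=1)  # full Lab image, converted to RGB for display
```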

Model Details

- Model Name: GAN Colorization Model
- Model Architecture: ResNet-34 backbone as the encoder in the generator network, with a PatchGAN discriminator network
- Framework: PyTorch
- Repository: Hammad712/GAN-Colorization-Model
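
The generator is reconstructed in the inference code below as a fastai DynamicUnet over a ResNet-34 body. The discriminator is not distributed with this card; for reference, a PatchGAN discriminator is conventionally a small convolutional stack that classifies overlapping patches of the input as real or fake. The sketch below follows the common pix2pix layout, and its filter counts and depth are assumptions rather than values taken from this repository:

```python
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Pix2pix-style PatchGAN sketch; layer sizes are assumed, not from the repo."""
    def __init__(self, in_channels=3, base_filters=64, n_layers=3):
        # in_channels=3: typically the L channel concatenated with real or predicted ab
        super().__init__()
        layers = [nn.Conv2d(in_channels, base_filters, kernel_size=4, stride=2, padding=1),
                  nn.LeakyReLU(0.2, inplace=True)]
        nf = base_filters
        for i in range(n_layers):
            stride = 1 if i == n_layers - 1 else 2
            layers += [nn.Conv2d(nf, nf * 2, kernel_size=4, stride=stride, padding=1, bias=False),
                       nn.BatchNorm2d(nf * 2),
                       nn.LeakyReLU(0.2, inplace=True)]
            nf *= 2
        # Final 1-channel map: each spatial position scores one image patch as real/fake
        layers.append(nn.Conv2d(nf, 1, kernel_size=4, stride=1, padding=1))
        self.model = nn.Sequential(*layers)

    def forward(self, x):
        return self.model(x)
```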

Model Training

Dataset

- Dataset Used: COCO 2017
- Training Size: 8,000 images
- Validation Size: 2,000 images
- Image Size: 256x256 pixels

Training Configuration

- Batch Size: 16
- Number of Epochs: 5
- Optimizer for Generator: Adam (learning rate: 0.0004, betas: (0.5, 0.999))
- Optimizer for Discriminator: Adam (learning rate: 0.0004, betas: (0.5, 0.999))
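
These settings map directly onto torch.optim.Adam. A minimal sketch, assuming G_net and D_net are the generator and discriminator modules:

```python
import torch

# Learning rate 4e-4 with betas (0.5, 0.999) for both networks, as listed above
opt_G = torch.optim.Adam(G_net.parameters(), lr=4e-4, betas=(0.5, 0.999))
opt_D = torch.optim.Adam(D_net.parameters(), lr=4e-4, betas=(0.5, 0.999))
```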

Loss Functions:

- GAN Loss: Binary Cross-Entropy Loss with Logits
- L1 Loss: pixel-wise L1 comparison between generated and real color channels
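
A minimal sketch of how these two terms are typically combined for the generator in a pix2pix-style setup; the L1 weight below is an assumed value (100 is a common default) and is not stated on this card:

```python
import torch
import torch.nn as nn

gan_criterion = nn.BCEWithLogitsLoss()  # GAN loss on raw discriminator logits
l1_criterion = nn.L1Loss()              # pixel-wise loss on the ab channels
lambda_l1 = 100.0                       # assumed weight, not confirmed by this card

def generator_loss(fake_logits, fake_ab, real_ab):
    # Adversarial term: the generator wants the discriminator to label its output real (1)
    adv = gan_criterion(fake_logits, torch.ones_like(fake_logits))
    # Reconstruction term: predicted ab channels should match the ground-truth ab channels
    pix = l1_criterion(fake_ab, real_ab)
    return adv + lambda_l1 * pix
```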

Inference Code

```python
from huggingface_hub import hf_hub_download
import torch
from PIL import Image
from torchvision import transforms
from skimage.color import rgb2lab, lab2rgb
import numpy as np
import matplotlib.pyplot as plt

# Download the model from the Hugging Face Hub
repo_id = "Hammad712/GAN-Colorization-Model"
model_filename = "generator.pt"
model_path = hf_hub_download(repo_id=repo_id, filename=model_filename)

# Define the generator model (same architecture as used during training)
from fastai.vision.learner import create_body
from torchvision.models import resnet34
from fastai.vision.models.unet import DynamicUnet

def build_generator(n_input=1, n_output=2, size=256):
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    backbone = create_body(resnet34(), pretrained=True, n_in=n_input, cut=-2)
    G_net = DynamicUnet(backbone, n_output, (size, size)).to(device)
    return G_net

# Initialize the generator and load the downloaded weights
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
G_net = build_generator(n_input=1, n_output=2, size=256)
G_net.load_state_dict(torch.load(model_path, map_location=device))
G_net.eval()

# Preprocessing: resize, convert RGB -> Lab, and scale the L channel to [-1, 1]
def preprocess_image(img_path):
    img = Image.open(img_path).convert("RGB")
    img = transforms.Resize((256, 256), Image.BICUBIC)(img)
    img = np.array(img)
    img_to_lab = rgb2lab(img).astype("float32")
    img_to_lab = transforms.ToTensor()(img_to_lab)
    L = img_to_lab[[0], ...] / 50. - 1.
    return L.unsqueeze(0).to(device)

# Inference: predict the ab channels, undo the scaling, and convert Lab -> RGB
def colorize_image(img_path, model):
    L = preprocess_image(img_path)
    with torch.no_grad():
        ab = model(L)
    L = (L + 1.) * 50.
    ab = ab * 110.
    Lab = torch.cat([L, ab], dim=1).permute(0, 2, 3, 1).cpu().numpy()
    rgb_imgs = []
    for img in Lab:
        img_rgb = lab2rgb(img)
        rgb_imgs.append(img_rgb)
    return np.stack(rgb_imgs, axis=0)

# Example image path
img_path = "/path/to/your/image.jpg"  # Replace with your image path

# Perform inference
colorized_images = colorize_image(img_path, G_net)

# Display the result
plt.imshow(colorized_images[0])
plt.axis("off")
plt.show()
```

Usage

To use the model for image colorization, install the dependencies imported above (torch, torchvision, fastai, huggingface_hub, scikit-image, numpy, matplotlib, and Pillow) and run the inference code. Replace the example image path with the path to the image you want to colorize.
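
For example, after running the script above, the returned array (float RGB values in [0, 1]) can be written to disk; the output filename here is just a placeholder:

```python
from PIL import Image
import numpy as np

# colorize_image returns float RGB in [0, 1]; convert to 8-bit before saving
out = (colorized_images[0] * 255).astype(np.uint8)
Image.fromarray(out).save("colorized.jpg")  # placeholder output path
```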

Model Performance

Qualitative Results

The model generates visually plausible colorizations for grayscale images.

Limitations

- The model may not always produce accurate colors for objects with complex or unusual color distributions.
- Performance may degrade on images that differ significantly from the training dataset.

Citation

If you use this model in your research, please cite the original repository:

```bibtex
@misc{Hammad7122023GANColorization,
  title={GAN-Colorization-Model},
  author={Hammad712},
  year={2023},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/Hammad712/GAN-Colorization-Model}},
}
```

Contact

For any issues or inquiries, please reach out to the model author through the Hugging Face repository.
