Sample Images From Model
NB: Remember to add <outline-icon> (with the angle brackets 😅) at the end of every prompt. This token activates the fine-tuned icon style of the pretrained model.
Web Icons
This repository contains the Web Icons model, a machine learning model for generating website icon images. The model is built with the Diffusers library and is licensed under a modified CreativeML OpenRAIL-M license. It was fine-tuned from https://huggingface.co/proximasanfinetuning/fantassified_icons_v2 using Textual Inversion.
License
The Web Icons model is licensed under a modified CreativeML OpenRAIL-M license.
Usage
Here's an example of how to use the Web Icons model with the Diffusers library:
from diffusers import StableDiffusionPipeline

# Load the fine-tuned Web Icons pipeline from the Hugging Face Hub
model_id = "mathiaslawson/web-icons"
pipe = StableDiffusionPipeline.from_pretrained(model_id)

# The prompt must end with the <outline-icon> token (see note above)
prompt = "an icon of a lion <outline-icon>"
image = pipe(prompt).images[0]
image.save("lion_icon.png")
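Since every prompt must end with the <outline-icon> token, a small helper can prevent forgetting it. This is an illustrative sketch, not part of Diffusers or this repository; `make_icon_prompt` is a hypothetical convenience function.

```python
def make_icon_prompt(subject: str) -> str:
    """Build a prompt for the Web Icons model.

    The model was fine-tuned with Textual Inversion, so every prompt
    must end with the <outline-icon> token to trigger the learned
    icon style. This helper appends it automatically.
    """
    return f"an icon of a {subject} <outline-icon>"


# Example: generate prompts for a batch of icon subjects
subjects = ["lion", "shopping cart", "envelope"]
prompts = [make_icon_prompt(s) for s in subjects]
print(prompts[0])  # an icon of a lion <outline-icon>
```

Each of these prompts can then be passed to the pipeline exactly as in the example above.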
Acknowledgments
The Web Icons model was created by Mathias Lawson. Credit for the base model used for pretraining goes to Proximasan's Fantassified Icons model: https://huggingface.co/proximasanfinetuning/fantassified_icons_v2
Although it is not perfect yet, the fine-tuned model already turns the 3D-style icons of the base Fantassified Icons model into flat, web-style icons, showing steady progress toward the desired results.
Contributions to the model are welcome 🙂. This is not the end; I will keep improving the model until it is ready for production web icons.