
anywefadoor

Installation

The servers run in Node.js using Express and are started on Vast via PM2 (Process Manager 2, installed with npm install -g pm2); other process managers are available. Start, restart, and check status with:

pm2 start <script>
pm2 restart <script>
pm2 status

PM2 logs normally end up in the home folder of the user running the code, under ~/.pm2/logs/, named after the script.
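The PM2 workflow above can be sketched as follows; the process name anydoor-ssl is illustrative, and the log paths assume PM2's defaults:

```shell
# Install PM2 globally (Node.js/npm must already be present)
npm install -g pm2

# Start the Express server under PM2 with a readable name
pm2 start ssl_server.js --name anydoor-ssl

# Restart after a code change, then check that it is online
pm2 restart anydoor-ssl
pm2 status

# Logs land in the running user's home folder by default
tail -f ~/.pm2/logs/anydoor-ssl-out.log ~/.pm2/logs/anydoor-ssl-error.log
```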

ssl_server.js in the ADOOR_ACE folder must be set to run on startup, and port assignments need consideration for the anydoor process. For the cloth segmentation, seg_ssl_server.js in the huggingface-cloth-segmentation folder should also be set to run on startup. The pip requirements for the segmentation should be installed from inside that folder; conda can be used here too.
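A minimal sketch of making both servers survive a reboot using PM2's built-in persistence (pm2 startup and pm2 save are standard PM2 commands; the paths are the ones used elsewhere in this card):

```shell
# Register both Express servers with PM2
cd /work/ADOOR_ACE && pm2 start ssl_server.js --name anydoor-ssl
cd /work/huggingface-cloth-segmentation && pm2 start seg_ssl_server.js --name cloth-seg-ssl

# Generate and install an init script for the current user,
# then freeze the current process list so PM2 restores it on boot
pm2 startup
pm2 save
```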

These two Express servers are all that run here.

Various necessary libraries to install via apt will reveal themselves depending on hardware: libXext and others, after the conda install and first run.

Install with conda:

conda env create -f environment.yaml
conda activate anydoor

or pip:

pip install -r requirements.txt

Additionally, for training, you need to install panopticapi, pycocotools, and lvis-api.

pip install git+https://github.com/cocodataset/panopticapi.git

pip install pycocotools -i https://pypi.douban.com/simple

pip install lvis

Download WefaDoor Checkpoint

Download the WefaDoor checkpoint.

Download the DINOv2 checkpoint and revise the path in /configs/anydoor.yaml (line 83).

Download Stable Diffusion V2.1 if you want to train from scratch.

To run the servers manually, use the following process.

First, WefaDoor:

cd /work/ADOOR_ACE

conda activate anydoor

start ssl_server.js (node or pm2)

python run_inference_api.py

Then, the following must run in a tmux shell, as in a completely new shell, not the shell session above. Second, the HF segmentation:

cd /work/huggingface-cloth-segmentation

conda activate seg

start seg_ssl_server.js

Leave both shells open on the instance and close the remote connection.
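Put together, the manual start sequence looks roughly like this; the tmux session names are illustrative, everything else follows the steps above:

```shell
# Shell 1: WefaDoor inference server
tmux new-session -s anydoor
cd /work/ADOOR_ACE
conda activate anydoor
node ssl_server.js &          # or: pm2 start ssl_server.js
python run_inference_api.py

# Shell 2: cloth segmentation server, in a separate tmux session
tmux new-session -s seg
cd /work/huggingface-cloth-segmentation
conda activate seg
node seg_ssl_server.js

# Detach from each session (Ctrl-b d) and close the remote connection;
# both servers keep running inside tmux.
```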
