Should be really basic, but I can't install this.

#8 · opened by Nafnlaus

No matter what I try, I see no branches and can't run any git checkout command to switch branches without errors like "Error: pathspec 'https://github.com/CiaraStrawberry/TemporalKit/tree/TemporalNet' did not match any file(s) known to git" - whether I'm trying to switch my old sd-webui-controlnet to TemporalNet, or (after cloning TemporalNet separately) trying to switch it to TemporalKit/tree/TemporalNet.

Could you put the below into a plain-English series of git commands? Because I (and ChatGPT, for that matter ;) ) am stumped as to what series of steps you want taken.

========
This took some modification of the original controlnet code so you'll have to do some extra things. If you just want to run a gradio example or look at the modified controlnet code, that's here: https://github.com/CiaraStrawberry/TemporalNet Just drop the model from this directory into that model folder and make sure the gradio_temporalnet.py script points at the model.

To use with stable diffusion, you can either use it with TemporalKit by moving to the branch here after following steps 1 and 2: https://github.com/CiaraStrawberry/TemporalKit/tree/TemporalNet , or use it just by accessing the base api through the temporalvideo.py script:

  1. move your controlnet webui install to this branch: https://github.com/CiaraStrawberry/sd-webui-controlnet-TemporalNet-API

========
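Is the intent something like the following? This is pure guesswork on my part - the repo and branch names are just taken from the links above:

```sh
# Guess 1: replace the stock controlnet extension with the forked repo outright
cd stable-diffusion-webui/extensions
mv sd-webui-controlnet sd-webui-controlnet.bak
git clone https://github.com/CiaraStrawberry/sd-webui-controlnet-TemporalNet-API

# Guess 2: for TemporalKit, a ".../tree/TemporalNet" URL normally refers to a
# branch named "TemporalNet", which is checked out by name, never by URL:
cd TemporalKit
git fetch origin
git checkout TemporalNet
```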

(I want to try this branch because I tried the old TemporalNet script and all I got was completely random images with zero consistency. Ideally I'd like to work within Gradio rather than having to run a script, if that's possible.)

Oh, and on that note: "and make sure the gradio_temporalnet.py script points at the model." - when I cloned TemporalNet, the directory I got has nothing called "gradio_temporalnet.py". Nor is it even clear what you mean by "that model folder". From the AUTOMATIC1111 directory, do you mean models/Stable-diffusion, models/ControlNet, or extensions/sd-webui-controlnet[-TemporalNet-API]/models?
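For reference, these are the candidate locations as I see them in my AUTOMATIC1111 tree - just to be explicit about which "models" folder might be meant:

```
stable-diffusion-webui/
├── models/
│   ├── Stable-diffusion/
│   └── ControlNet/
└── extensions/
    └── sd-webui-controlnet[-TemporalNet-API]/
        └── models/
```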

Lastly, is EBSynth required for this? As a Linux user, when I search for EBSynth, the overwhelming majority of hits are for some Windows GUI program (and in your tutorial, you appear to show a Windows GUI program). I tried this:

https://github.com/jamriska/ebsynth.git

... but all the results I've gotten from it are garbage... so juddery that it's best to skip it altogether and live with Stable Diffusion's flicker.

Hi, I had similar issues. Check the solution posted by @JasonGilholme in discussion #3. It works for me.

https://huggingface.co/CiaraRowles/TemporalNet2/discussions/3

I don't think you had similar issues. Sounds like from the very beginning you already had it installed and updated but were just having trouble running it, while I can't even get it installed because I don't know what git commands she intends for us to run.

@CiaraRowles Can you please provide a solid end-to-end guide for setting this up? E.g., do we need to uninstall the old controlNET first, then install both of these extensions, or just one of them? This really isn't a solid explanation of how to get it running. I've installed both, and when I restart the web UI it just errors out with:

CACHE_SIZE = shared.cmd_opts.controlnet_preprocessor_cache_size
AttributeError: 'Namespace' object has no attribute 'controlnet_preprocessor_cache_size'
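Judging from that traceback, the extension expects a command-line option the base webui never registers. A minimal local workaround - just a guess at the shape of the fix, since the exact file and line vary per install - would be to give that lookup a fallback:

```python
# In the extension file the traceback points at (this import already
# exists in A1111 extensions):
from modules import shared

# Fall back to 0 (caching disabled) when the option isn't registered:
CACHE_SIZE = getattr(shared.cmd_opts, "controlnet_preprocessor_cache_size", 0)
```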

Ok, I ended up getting it working - going to share for others who are struggling. Here's the actual process for getting this to work.

  1. Backup / uninstall your old controlNET version.
  2. Install this using the web UI: https://github.com/CiaraStrawberry/sd-webui-controlnet-TemporalNet-API
  3. Install this using the web UI: https://github.com/CiaraStrawberry/TemporalNet
  4. Download the model, the model .yaml, and temporalvideo.py from this model card
  5. Put the model and the yaml file inside your new stable-diffusion-webui\extensions\sd-webui-controlnet-TemporalNet-API\models
  6. Make sure you also have openpose1.1 and hed1.1 installed in this new model folder - you can grab them from your old controlnet folder.
  7. Make sure you have model caching disabled. If it was enabled in your old controlNET settings, it will bomb out when you run this. You may need to go directly to the failing line in the script and set MODEL_CACHES = 0.
  8. Fully restart your web ui
  9. Put temporalvideo.py into a new directory of your choosing and open it in an editor. In the argparse section at the top, replace the defaults with your proper input, output, and init image file locations (see the sketch after this list).
  10. Run temporalvideo.py using the same Python distro that you run Stable Diffusion with. This is very important for Windows users with WSL or PyCharm, etc.: if you have multiple Python installs, only the one with all the packages installed for SD will work to run this.
  11. PROFIT! - This is actually sick and worth the effort to get it set up and try it out.
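For step 9, here's roughly the shape of the section you'd be editing. This is a sketch, not the actual file - the argument names are illustrative, so match them to whatever your copy of temporalvideo.py defines:

```python
import argparse

# Hypothetical argparse block like the one at the top of temporalvideo.py;
# replace the default paths with your own locations.
parser = argparse.ArgumentParser()
parser.add_argument("--input_video", default="/path/to/input.mp4")
parser.add_argument("--output_folder", default="/path/to/output_frames")
parser.add_argument("--init_image", default="/path/to/init.png")
args = parser.parse_args()
```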
> 6. Make sure you also have openpose1.1 and hed1.1 installed in this new model folder - you can grab them from your old controlnet folder.

Where can I find hed 1.1? The only version I can find is the 1.0 one: https://huggingface.co/lllyasviel/ControlNet/blob/main/models/control_sd15_hed.pth

Or should I use control_v11p_sd15_scribble.pth with the scribble_hed preprocessor?

Hey @frone125, thanks so much for this guide! My own computer isn't strong enough, but your guide let me install TemporalNet on a rented GPU, which was quite a daunting task for someone like me!

I'd love it, though, if you could share more information about how you're tweaking the temporalvideo script. I tried various things, but here is the best result I could get:

[attached GIF: ezgif-2-66cd85d258.gif]

For comparison, here is the result I get with TemporalNet 1:

[attached GIF: smooth1.gif]

Which has better style and consistency :/

Even though you've already done a lot by providing your advice, it would be amazing if you could share a bit more of your knowledge!

I pretty much re-architected that script and tested a bunch of different controlNETs, and across my various image generations I found the best combo of three to be softedge, lineart, and temporalNET.

I don't have a huge requirement to maintain perfect temporal coherence, but with these three I could jack up my image denoise strength and still get the coherence to look pretty good. Remember that the main benefit of temporalNET is that you are using optical flow. Optical flow basically takes the delta between your previous frame and your current frame - how far each part of the image moved between the two - and uses it to make a smooth transition between them.
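To make that concrete, here's a minimal sketch of the optical-flow idea using OpenCV. This is illustrative only, not TemporalNet's actual implementation, and the frame paths are placeholders:

```python
import cv2
import numpy as np

prev_bgr = cv2.imread("frame_000.png")  # placeholder frame paths
curr_bgr = cv2.imread("frame_001.png")
prev = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
curr = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)

# Dense flow: for every pixel in curr, the (dx, dy) offset back to prev
flow = cv2.calcOpticalFlowFarneback(curr, prev, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

# Warp the previous frame toward the current one with that flow field;
# conditioning on this warped image is what smooths the transition.
h, w = curr.shape
map_x = (np.tile(np.arange(w), (h, 1)) + flow[..., 0]).astype(np.float32)
map_y = (np.tile(np.arange(h)[:, None], (1, w)) + flow[..., 1]).astype(np.float32)
warped = cv2.remap(prev_bgr, map_x, map_y, cv2.INTER_LINEAR)
```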

I noticed that stylistically the higher I made the weight of my temporalNET model, the worse the style got, but the more consistent it was. So I would suggest playing with the weights of your controlNETs as the main way to figure out what's best for a particular image sequence.

If you're new to Python, study that script and ask ChatGPT for help; it should be pretty easy to add additional controlNETs or swap out the ones that are in there. The script looks for them in particular directories and references their names in a consistent way that you can just change. Download the other controlNET models you want to try, put them in the same directory as the others, then explain the situation in thorough English to ChatGPT-4 and give it the entire script; hopefully it will help you modify the key lines (weights, models, etc.).
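If it helps: when a script like this talks to the webui API, the controlNET units ride along as a list of dicts, so swapping models is mostly editing entries like these. This sketch assumes the sd-webui-controlnet API payload format; the model names and weights are examples, not recommendations:

```python
# One dict per controlNET unit in the API payload; change "model" and
# "weight" here to swap units or rebalance them.
controlnet_units = [
    {"module": "softedge_hed", "model": "control_v11p_sd15_softedge", "weight": 0.7},
    {"module": "lineart_realistic", "model": "control_v11p_sd15_lineart", "weight": 0.6},
    {"module": "none", "model": "temporalnet", "weight": 1.0},  # must match your model file
]

payload = {
    "prompt": "your prompt here",
    "alwayson_scripts": {"controlnet": {"args": controlnet_units}},
}
# then e.g.: requests.post(f"{webui_url}/sdapi/v1/img2img", json=payload)
```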

> 6. Make sure you also have openpose1.1 and hed1.1 installed in this new model folder - you can grab them from your old controlnet folder.
>
> Where can I find hed 1.1? The only version I can find is the 1.0 one: https://huggingface.co/lllyasviel/ControlNet/blob/main/models/control_sd15_hed.pth
>
> Or should I use control_v11p_sd15_scribble.pth with the scribble_hed preprocessor?

This was another gotcha in her original guide. I should have mentioned it in my guide - sorry, I've been away. The 1.1 models are here: https://huggingface.co/lllyasviel/ControlNet-v1-1
