Help with locking keyframes in diffusers, like the sliding context window in the ComfyUI nodes
Hi, first of all, great release!
It's mind-blowing seeing AnimateDiff go from hours of generation time to THIS lol
Anyways, I'm not that confident with the Python code, so I'm not quite sure where to start.
I'd need to lock the first 2-4 frames of each generation to the last frames of the previous generation (like the sliding context window implementations, but I don't care about the "sliding context" part itself, since I need the pipeline to be callable sequentially).
Effectively an AnimDiffVideo2VideoPipeline with an extra argument to lock the first few frames.
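To make it a bit more concrete, this is roughly what I imagine (just a sketch, not something I have running; I'm assuming the AnimateDiff pipelines accept `callback_on_step_end` like the regular SD pipelines do, and `locked_latents` / `lock_first_frames` / `NUM_LOCKED` are my own placeholder names):

```python
import torch

# locked_latents: the last NUM_LOCKED frames of the previous clip, VAE-encoded,
# e.g. something along the lines of:
#   lat = pipe.vae.encode(prev_frames).latent_dist.sample() * pipe.vae.config.scaling_factor
#   locked_latents = lat.permute(1, 0, 2, 3).unsqueeze(0)  # -> (1, 4, NUM_LOCKED, h, w)
NUM_LOCKED = 4

def lock_first_frames(pipe, step, timestep, callback_kwargs):
    # AnimateDiff latents are (batch, channels, frames, height, width)
    latents = callback_kwargs["latents"]
    # Noise the clean locked latents to the current timestep so they sit at the same
    # noise level as the rest of the batch, then overwrite the leading frames
    # (basically the inpainting trick, but along the time axis instead of a spatial mask).
    noise = torch.randn_like(locked_latents)
    noisy_locked = pipe.scheduler.add_noise(locked_latents, noise, timestep)
    latents[:, :, :NUM_LOCKED] = noisy_locked
    callback_kwargs["latents"] = latents
    return callback_kwargs

output = pipe(
    prompt="...",
    num_frames=16,
    callback_on_step_end=lock_first_frames,
    callback_on_step_end_tensor_inputs=["latents"],
)
```

(I realize the callback fires after the scheduler step, so the timestep there is probably off by one; details like that are exactly where I get lost.)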
I've tried to do realtime vid2vid and it works really well, but obviously it ends up looking more like individual clips stitched together.
Maybe you can help guide me in the right direction? I could probably hack together an AnimateDiff + ControlNet pipeline and try to lock the frames with ControlNet tile, but that feels like a dirty solution, since the "correct" way of locking keyframes has already been solved; I just can't wrap my head around the different implementations and hacks.
Here's a showcase of my efforts so far :)
https://www.youtube.com/watch?v=C0B5X24sU7s
https://www.youtube.com/watch?v=5RjnO5stbMg
It feels like realtime AI video is just around the corner!