jbilcke-hf HF staff committed on
Commit
9f5222a
1 Parent(s): df92730
Files changed (1)
  1. README.md +20 -54
README.md CHANGED
@@ -16,63 +16,29 @@ Warning: this is an experimental, proof-of-concept project made in a few days.

 It is not ready for production use by other people! Also, this uses models that should only be used for research purposes (no commercial usage).

- Note: this won't work on iOS due to an apparent ban on Media Source Extensions (available on iPadOS).

- It should be possible, however, to use some other protocol or library.

- # Installation
-
- ## Building and running without Docker
-
- ```bash
- nvm use
- npm i
- ```
-
- First, set up some env vars:
-
- WEBTV_VIDEOPATH="./sandbox/video"
- WEBTV_AUDIOPATH="./sandbox/audio"
- WEBTV_RTMP_URL="rtmp://localhost:1935/webtv"
-
- In a terminal, run:
-
- ```
- ./scripts/init.sh
- ```
-
- Then run:
-
- ```
- ./scripts/audio.sh
- ```
-
- In another terminal, run:
-
- ```
- ./scripts/video.sh
- ```
-
- In another terminal, run:
-
- ```
- ./scripts/stream.sh
- ```
-
- In another terminal, run:
-
- ```
- npm run start
- ```
-
- ## Building and running with Docker
-
- ```bash
- npm run docker
- ```
-
- This script is a shortcut executing the following commands:
-
- ```bash
- docker build -t ai-webtv .
- docker run -it -p 7860:7860 ai-webtv
- ```
+ Note: because the stream uses FLV, it doesn't work on iPhone. There is, however, a [Twitch mirror here](https://www.twitch.tv/ai_webtv).

+ The main code of the webtv is located inside the [media-server](https://huggingface.co/spaces/jbilcke-hf/media-server/tree/main):
+ Manual steps:
+ - a human writes a short paragraph describing a multi-shot video sequence
+ - the paragraph is manually submitted to GPT-4 to generate a list of video captions, one per shot (the system instructions are extracts from a Stable Diffusion guide)
+ - the captions are committed to the [playlist database](https://huggingface.co/spaces/jbilcke-hf/media-server/raw/main/database.json) (a hypothetical entry shape is sketched below)
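
The actual schema is whatever `database.json` defines; purely as a hypothetical illustration, an entry produced by these manual steps could pair the human-written paragraph with the GPT-4 captions, one per shot:

```typescript
// Hypothetical shape only: the real schema is defined by database.json in the media-server Space.
interface PlaylistEntry {
  // short human-written paragraph describing the multi-shot sequence
  sequenceDescription: string
  // one Stable-Diffusion-style caption per shot, as returned by GPT-4
  shots: string[]
}

const example: PlaylistEntry = {
  sequenceDescription: "A fox explores a foggy forest at dawn, then reaches a river.",
  shots: [
    "cinematic shot of a red fox walking through a foggy forest at dawn, volumetric light, 35mm",
    "close-up of the fox sniffing dew on ferns, shallow depth of field, morning mist",
    "wide shot of the fox arriving at a calm river, golden hour reflections",
  ],
}

console.log(JSON.stringify(example, null, 2))
```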
 
+ Inside the `media-server` space (generation process running in the background):
+ - for each prompt in the database:
+   - generate a silent 3-second video clip with Zeroscope V2 576w (hosted on Hugging Face Spaces)
+   - upscale the clip with Zeroscope V2 XL (also a HF Space)
+   - perform frame interpolation with FILM (also a HF Space)
+   - store the result in the Persistent Storage of the media-server Space (this loop is sketched below)
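
A minimal sketch of that background loop, assuming the three models are reachable as Gradio Spaces via `@gradio/client`; the Space IDs, endpoint names, payloads and storage path below are illustrative assumptions, not the actual media-server code:

```typescript
// Sketch only: Space IDs, endpoint names, payloads and paths are assumptions for illustration.
import { client } from "@gradio/client"
import { writeFile } from "node:fs/promises"

async function generateClip(prompt: string, index: number): Promise<void> {
  // 1. generate a silent ~3 second clip (Zeroscope V2 576w Space, hypothetical endpoint)
  const zeroscope = await client("some-user/zeroscope-v2-576w")
  const base = (await zeroscope.predict("/run", [prompt])) as { data: any[] }

  // 2. upscale the clip (Zeroscope V2 XL Space, hypothetical endpoint)
  const upscaler = await client("some-user/zeroscope-v2-xl")
  const upscaled = (await upscaler.predict("/run", [base.data[0], prompt])) as { data: any[] }

  // 3. frame interpolation with FILM (hypothetical endpoint)
  const film = await client("some-user/frame-interpolation-film")
  const smooth = (await film.predict("/run", [upscaled.data[0]])) as { data: any[] }

  // 4. persist the result so the streaming process can pick it up
  // (assumes the last Space returns a downloadable URL, and that persistent storage is mounted at /data)
  const res = await fetch(smooth.data[0])
  await writeFile(`/data/videos/clip_${index}.mp4`, Buffer.from(await res.arrayBuffer()))
}
```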
 
+ Inside the `media-server` space (streaming process running in the foreground):
+ - for each video file in the persistent storage folder:
+   - add it to a new FFmpeg playlist (it's just a plain .txt file)
+   - broadcast it over the RTMP protocol using FFmpeg (in FLV format)
+   - distribution of the stream using node-media-server (see the sketch after this list)
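
A rough sketch of what that foreground process could look like, assuming node-media-server handles RTMP ingest and FFmpeg is spawned to push a concat playlist; the ports, paths, encoder flags and stream key are assumptions (the old env vars only suggest an RTMP endpoint like `rtmp://localhost:1935/webtv`):

```typescript
// Sketch only: ports, paths, FFmpeg flags and the stream key are illustrative assumptions.
import NodeMediaServer from "node-media-server"
import { spawn } from "node:child_process"
import { readdirSync, writeFileSync } from "node:fs"

// RTMP ingest + HTTP-FLV playback endpoint
const nms = new NodeMediaServer({
  rtmp: { port: 1935, chunk_size: 60000, gop_cache: true, ping: 30, ping_timeout: 60 },
  http: { port: 8000, allow_origin: "*" },
})
nms.run()

// build an FFmpeg concat playlist (a plain .txt file) from the persistent storage folder
const files = readdirSync("/data/videos").filter((f) => f.endsWith(".mp4"))
const playlist = files.map((f) => `file '/data/videos/${f}'`).join("\n")
writeFileSync("/data/playlist.txt", playlist)

// broadcast the playlist over RTMP in FLV format
spawn("ffmpeg", [
  "-re",                        // read input at native frame rate (live pacing)
  "-f", "concat", "-safe", "0",
  "-i", "/data/playlist.txt",
  "-c:v", "libx264", "-preset", "veryfast",
  "-f", "flv",
  "rtmp://localhost:1935/webtv/live", // hypothetical stream key
], { stdio: "inherit" })
```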
 
+ Inside the `AI-WebTV` space:
+ - display the stream using `mpegts.js` (see the player sketch below)
+ - this doesn't work on iPhone, but now there is also a Twitch mirror
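
On the player side, a minimal sketch of how `mpegts.js` can attach a live HTTP-FLV stream to a `<video>` element; the element id and stream URL are assumptions, not the actual AI-WebTV code:

```typescript
// Sketch only: the element id and FLV URL depend on how the media-server is configured.
import mpegts from "mpegts.js"

// mpegts.js relies on Media Source Extensions, which is why iPhone Safari can't play the stream
if (mpegts.getFeatureList().mseLivePlayback) {
  const videoElement = document.getElementById("tv") as HTMLVideoElement
  const player = mpegts.createPlayer({
    type: "flv",
    isLive: true,
    url: "https://example-media-server.example/webtv/live.flv", // hypothetical HTTP-FLV endpoint
  })
  player.attachMediaElement(videoElement)
  player.load()
  player.play()
}
```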