Commit: update

Changed files:
- README.md +0 -78
- app.py +109 -90
- assets/palm_prompts.toml +201 -0
- constants/desc.py +1 -1
- constants/init_values.py +0 -1
- interfaces/chat_ui.py +60 -72
- interfaces/export_ui.py +7 -7
- interfaces/plot_gen_ui.py +47 -152
- interfaces/story_gen_ui.py +113 -152
- interfaces/ui.py +38 -13
- interfaces/utils.py +36 -22
- interfaces/view_change_ui.py +91 -1
- modules/__init__.py +1 -0
- modules/image_maker.py +44 -26
- modules/music_maker.py +2 -0
- modules/palmchat.py +11 -8
README.md
CHANGED
@@ -7,81 +7,3 @@ app_file: app.py
 pinned: false
 license: apache-2.0
 ---
-
-# Zero2Story
-
-![](assets/overview.png)
-
-Zero2Story is a framework built on top of [PaLM API](https://developers.generativeai.google), [Stable Diffusion](https://en.wikipedia.org/wiki/Stable_Diffusion), and [MusicGen](https://audiocraft.metademolab.com/musicgen.html) that lets ordinary people create their own stories. The framework consists of the **background setup**, **character setup**, and **interactive story generation** phases.
-
-**1. Background setup**: In this phase, users set up the genre, place, and mood of the story. The genre is the key setting on which everything else depends.
-
-**2. Character setup**: In this phase, users can set up to four characters. For each character, users decide characteristics and basic information such as name, age, MBTI, and personality. The image of each character can also be generated from this information using Stable Diffusion.
-  - PaLM API translates the given character information into a list of keywords that Stable Diffusion can effectively understand.
-  - Then, Stable Diffusion generates images using the keywords as a prompt.
-
-**3. Interactive story generation**: In this phase, the first few paragraphs are generated solely from the information gathered in the background and character setup phases. Afterwards, users choose a direction from the three options that PaLM API generates, and the next part of the story is generated based on that choice. This cycle of choosing an option and generating further story continues until users decide to stop.
-  - At each step, users can also generate background images and music that describe the scene, using Stable Diffusion and MusicGen.
-  - If users are not satisfied with the generated story, options, image, or music in a given turn, they can ask to re-generate them.
-
-## Prerequisites
-
-### PaLM API key
-
-This project heavily depends on [PaLM API](https://developers.generativeai.google). If you want to run it in your own environment, you need to get a [PaLM API key](https://developers.generativeai.google/tutorials/setup) and paste it into the `.palm_api_key.txt` file in the root directory.
-
-### Packages
-
-Make sure you have installed all of the following prerequisites on your development machine:
-* CUDA Toolkit 11.8 with cuDNN 8 - [Download & Install CUDA Toolkit](https://developer.nvidia.com/cuda-toolkit). Running on a GPU is highly recommended; a CPU-only environment will be very slow.
-* Poetry - [Download & Install Poetry](https://python-poetry.org/docs/#installation). It is the Python packaging and dependency manager.
-* SQLite3 v3.35.0 or higher - Required by the project's dependencies.
-  - Ubuntu 22.04 and later
-    ```shell
-    $ sudo apt install libc6 sqlite3 libsqlite3
-    ```
-  - Ubuntu 20.04
-    ```shell
-    $ sudo sh -c 'cat <<EOF >> /etc/apt/sources.list
-    deb http://archive.ubuntu.com/ubuntu/ jammy main
-    deb http://security.ubuntu.com/ubuntu/ jammy-security main
-    EOF'
-    $ sudo apt update
-    $ sudo apt install libc6 sqlite3 libsqlite3
-    ```
-* FFmpeg (Optional) - Installing FFmpeg enables local video mixing, which generates results more quickly than [other methods](https://huggingface.co/spaces/fffiloni/animated-audio-visualizer).
-  ```shell
-  $ sudo apt install ffmpeg
-  ```
-
-## Installation
-
-Before running the application for the first time, install the required dependencies:
-```shell
-$ poetry install
-```
-
-If dependencies change or need updates in the future, you can use:
-```shell
-$ poetry update
-```
-
-## Run
-
-```shell
-$ poetry run python app.py
-```
-
-## Todo
-
-- [ ] Exporting of generated stories as PDF
-
-## Stable Diffusion Model Information
-
-### Checkpoints
-- For character image generation: [CIVIT.AI Model 129896](https://civitai.com/models/129896)
-- For background image generation: [CIVIT.AI Model 93931](https://civitai.com/models/93931?modelVersionId=148652)
-
-### VAEs
-- For character image generation: [CIVIT.AI Model 23906](https://civitai.com/models/23906)
-- For background image generation: [CIVIT.AI Model 65728](https://civitai.com/models/65728)
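The character-image flow the deleted README describes (structured character info → PaLM-generated keyword list → Stable Diffusion prompt) can be sketched roughly as follows. This is an illustrative sketch only, not code from the repository: `palm_generate` stands in for the async PaLM call in `modules/palmchat.py`, and the keyword-extraction wording is invented for the example.

```python
# Illustrative sketch of the README's "character info -> keywords -> image prompt"
# idea; names and prompt wording are hypothetical, not the actual Zero2Story API.
async def character_to_sd_prompt(palm_generate, name, age, personality, job):
    request = (
        "Translate the character below into a short, comma-separated list of visual "
        "keywords suitable for a Stable Diffusion prompt.\n"
        f"- name: {name}\n- age: {age}\n- personality: {personality}\n- job: {job}\n"
        "keywords:"
    )
    keywords = await palm_generate(request)  # e.g. "silver hair, long coat, determined gaze"
    # The keyword list becomes the positive prompt handed to the image model.
    return ", ".join(k.strip() for k in keywords.split(",") if k.strip())
```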
app.py
CHANGED
@@ -301,39 +301,45 @@ with gr.Blocks(css=STYLE) as demo:
     clear_btn = gr.Button("clear", elem_classes=["control-button"])

     pre_to_setup_btn.click(
-
+        fn=None,
         inputs=None,
-        outputs=[pre_phase, background_setup_phase]
+        outputs=[pre_phase, background_setup_phase],
+        _js=view_change_ui.pre_to_setup_js
     )

     back_to_pre_btn.click(
-
+        fn=None,
         inputs=None,
-        outputs=[pre_phase, background_setup_phase]
+        outputs=[pre_phase, background_setup_phase],
+        _js=view_change_ui.back_to_pre_js
     )

     world_setup_confirm_btn.click(
-
+        fn=None,
         inputs=None,
-        outputs=[background_setup_phase, character_setup_phase]
+        outputs=[background_setup_phase, character_setup_phase],
+        _js=view_change_ui.world_setup_confirm_js
     )

     back_to_background_setup_btn.click(
-
+        fn=None,
         inputs=None,
-        outputs=[background_setup_phase, character_setup_phase]
+        outputs=[background_setup_phase, character_setup_phase],
+        _js=view_change_ui.back_to_background_setup_js
     )

     restart_from_story_generation_btn.click(
-
+        fn=None,
         inputs=None,
-        outputs=[pre_phase, writing_phase]
+        outputs=[pre_phase, writing_phase],
+        _js=view_change_ui.restart_from_story_generation_js
     )

     story_writing_done_btn.click(
-
+        fn=None,
         inputs=None,
-        outputs=[writing_phase, export_phase]
+        outputs=[writing_phase, export_phase],
+        _js=view_change_ui.story_writing_done_js
     )

     title_gen_btn.click(
@@ -343,18 +349,20 @@ with gr.Blocks(css=STYLE) as demo:
     )

     export_done_btn.click(
-
+        fn=None,
         inputs=None,
-        outputs=[export_phase, export_view_phase]
-
+        outputs=[export_phase, export_view_phase],
+        _js=view_change_ui.export_done_js
+    )
+    export_done_btn.click(
         export_ui.export,
         inputs=[
             title_txt,
             cursors,
-            selected_main_char_image1, name_txt1, age_dd1,
-            side_char_enable_ckb1, selected_side_char_image1, name_txt2, age_dd2,
-            side_char_enable_ckb2, selected_side_char_image2, name_txt3, age_dd3,
-            side_char_enable_ckb3, selected_side_char_image3, name_txt4, age_dd4,
+            selected_main_char_image1, name_txt1, age_dd1, personality_dd1, job_dd1,
+            side_char_enable_ckb1, selected_side_char_image1, name_txt2, age_dd2, personality_dd2, job_dd2,
+            side_char_enable_ckb2, selected_side_char_image2, name_txt3, age_dd3, personality_dd3, job_dd3,
+            side_char_enable_ckb3, selected_side_char_image3, name_txt4, age_dd4, personality_dd4, job_dd4,
         ],
         outputs=[
             export_html
@@ -362,16 +370,19 @@ with gr.Blocks(css=STYLE) as demo:
     )

     back_to_story_writing_btn.click(
-
+        fn=None,
         inputs=None,
-        outputs=[writing_phase, export_phase]
+        outputs=[writing_phase, export_phase],
+        _js=view_change_ui.back_to_story_writing_js
     )

     restart_from_export_view_btn.click(
-
+        fn=None,
         inputs=None,
-        outputs=[pre_phase, export_view_phase]
-
+        outputs=[pre_phase, export_view_phase],
+        _js=view_change_ui.restart_from_export_view_js
+    )
+    restart_from_export_view_btn.click(
         ui.reset,
         inputs=None,
         outputs=[
@@ -392,10 +403,12 @@ with gr.Blocks(css=STYLE) as demo:
     )

     restart_from_export_btn.click(
-
+        fn=None,
         inputs=None,
-        outputs=[pre_phase, export_phase]
-
+        outputs=[pre_phase, export_phase],
+        _js=view_change_ui.restart_from_export_js
+    )
+    restart_from_export_btn.click(
         ui.reset,
         inputs=None,
         outputs=[
@@ -416,10 +429,12 @@ with gr.Blocks(css=STYLE) as demo:
     )

     character_setup_confirm_btn.click(
-
+        fn=None,
         inputs=None,
-        outputs=[character_setup_phase, writing_phase]
-
+        outputs=[character_setup_phase, writing_phase],
+        _js=view_change_ui.character_setup_confirm_js
+    )
+    character_setup_confirm_btn.click(
         story_gen_ui.disable_btns,
         inputs=None,
         outputs=[
@@ -434,10 +449,10 @@ with gr.Blocks(css=STYLE) as demo:
         inputs=[
             cursors,
             genre_dd, place_dd, mood_dd,
-            name_txt1, age_dd1,
-            side_char_enable_ckb1, name_txt2, age_dd2,
-            side_char_enable_ckb2, name_txt3, age_dd3,
-            side_char_enable_ckb3, name_txt4, age_dd4,
+            name_txt1, age_dd1, personality_dd1, job_dd1,
+            side_char_enable_ckb1, name_txt2, age_dd2, personality_dd2, job_dd2,
+            side_char_enable_ckb2, name_txt3, age_dd3, personality_dd3, job_dd3,
+            side_char_enable_ckb3, name_txt4, age_dd4, personality_dd4, job_dd4,
         ],
         outputs=[
             cursors, cur_cursor, story_content, story_progress, image_gen_btn, audio_gen_btn,
@@ -448,10 +463,10 @@ with gr.Blocks(css=STYLE) as demo:
         inputs=[
             cursors,
             genre_dd, place_dd, mood_dd,
-            name_txt1, age_dd1,
-            side_char_enable_ckb1, name_txt2, age_dd2,
-            side_char_enable_ckb2, name_txt3, age_dd3,
-            side_char_enable_ckb3, name_txt4, age_dd4,
+            name_txt1, age_dd1, personality_dd1, job_dd1,
+            side_char_enable_ckb1, name_txt2, age_dd2, personality_dd2, job_dd2,
+            side_char_enable_ckb2, name_txt3, age_dd3, personality_dd3, job_dd3,
+            side_char_enable_ckb3, name_txt4, age_dd4, personality_dd4, job_dd4,
         ],
         outputs=[
             action_btn1, action_btn2, action_btn3, progress_comp
@@ -483,10 +498,10 @@ with gr.Blocks(css=STYLE) as demo:
         inputs=[
             cursors,
             genre_dd, place_dd, mood_dd,
-            name_txt1, age_dd1,
-            side_char_enable_ckb1, name_txt2, age_dd2,
-            side_char_enable_ckb2, name_txt3, age_dd3,
-            side_char_enable_ckb3, name_txt4, age_dd4,
+            name_txt1, age_dd1, personality_dd1, job_dd1,
+            side_char_enable_ckb1, name_txt2, age_dd2, personality_dd2, job_dd2,
+            side_char_enable_ckb2, name_txt3, age_dd3, personality_dd3, job_dd3,
+            side_char_enable_ckb3, name_txt4, age_dd4, personality_dd4, job_dd4,
         ],
         outputs=[
             action_btn1, action_btn2, action_btn3, progress_comp
@@ -518,10 +533,10 @@ with gr.Blocks(css=STYLE) as demo:
         inputs=[
             cursors, cur_cursor,
             genre_dd, place_dd, mood_dd,
-            name_txt1, age_dd1,
-            side_char_enable_ckb1, name_txt2, age_dd2,
-            side_char_enable_ckb2, name_txt3, age_dd3,
-            side_char_enable_ckb3, name_txt4, age_dd4,
+            name_txt1, age_dd1, personality_dd1, job_dd1,
+            side_char_enable_ckb1, name_txt2, age_dd2, personality_dd2, job_dd2,
+            side_char_enable_ckb2, name_txt3, age_dd3, personality_dd3, job_dd3,
+            side_char_enable_ckb3, name_txt4, age_dd4, personality_dd4, job_dd4,
         ],
         outputs=[
             cursors, cur_cursor, story_content, story_progress, image_gen_btn, audio_gen_btn
@@ -531,10 +546,10 @@ with gr.Blocks(css=STYLE) as demo:
         inputs=[
             cursors,
             genre_dd, place_dd, mood_dd,
-            name_txt1, age_dd1,
-            side_char_enable_ckb1, name_txt2, age_dd2,
-            side_char_enable_ckb2, name_txt3, age_dd3,
-            side_char_enable_ckb3, name_txt4, age_dd4,
+            name_txt1, age_dd1, personality_dd1, job_dd1,
+            side_char_enable_ckb1, name_txt2, age_dd2, personality_dd2, job_dd2,
+            side_char_enable_ckb2, name_txt3, age_dd3, personality_dd3, job_dd3,
+            side_char_enable_ckb3, name_txt4, age_dd4, personality_dd4, job_dd4,
         ],
         outputs=[
             action_btn1, action_btn2, action_btn3, progress_comp
@@ -561,50 +576,54 @@ with gr.Blocks(css=STYLE) as demo:
     gen_char_btn1.click(
         ui.gen_character_image,
         inputs=[
-            gallery_images1, name_txt1, age_dd1,
+            gallery_images1, name_txt1, age_dd1, personality_dd1, job_dd1, genre_dd, place_dd, mood_dd, creative_dd1],
         outputs=[char_gallery1, gallery_images1, selected_main_char_image1]
     )

     gen_char_btn2.click(
         ui.gen_character_image,
-        inputs=[gallery_images2, name_txt2, age_dd2,
+        inputs=[gallery_images2, name_txt2, age_dd2, personality_dd2, job_dd2, genre_dd, place_dd, mood_dd, creative_dd2],
         outputs=[char_gallery2, gallery_images2, selected_side_char_image1]
     )

     gen_char_btn3.click(
         ui.gen_character_image,
-        inputs=[gallery_images3, name_txt3, age_dd3,
+        inputs=[gallery_images3, name_txt3, age_dd3, personality_dd3, job_dd3, genre_dd, place_dd, mood_dd, creative_dd3],
        outputs=[char_gallery3, gallery_images3, selected_side_char_image2]
     )

     gen_char_btn4.click(
         ui.gen_character_image,
-        inputs=[gallery_images4, name_txt4, age_dd4,
+        inputs=[gallery_images4, name_txt4, age_dd4, personality_dd4, job_dd4, genre_dd, place_dd, mood_dd, creative_dd4],
         outputs=[char_gallery4, gallery_images4, selected_side_char_image3]
     )

     random_name_btn1.click(
-
+        fn=None,
         inputs=[name_txt1, name_txt2, name_txt3, name_txt4],
         outputs=[name_txt1],
+        _js=ui.get_random_name_js
     )

     random_name_btn2.click(
-
+        fn=None,
         inputs=[name_txt2, name_txt1, name_txt3, name_txt4],
         outputs=[name_txt2],
+        _js=ui.get_random_name_js
     )

     random_name_btn3.click(
-
+        fn=None,
         inputs=[name_txt3, name_txt1, name_txt2, name_txt4],
         outputs=[name_txt3],
+        _js=ui.get_random_name_js
     )

     random_name_btn4.click(
-
+        fn=None,
         inputs=[name_txt4, name_txt1, name_txt2, name_txt3],
         outputs=[name_txt4],
+        _js=ui.get_random_name_js
     )

     ### Story generation
@@ -732,10 +751,10 @@ with gr.Blocks(css=STYLE) as demo:
             cursors,
             action_btn1,
             genre_dd, place_dd, mood_dd,
-            name_txt1, age_dd1,
-            side_char_enable_ckb1, name_txt2, age_dd2,
-            side_char_enable_ckb2, name_txt3, age_dd3,
-            side_char_enable_ckb3, name_txt4, age_dd4,
+            name_txt1, age_dd1, personality_dd1, job_dd1,
+            side_char_enable_ckb1, name_txt2, age_dd2, personality_dd2, job_dd2,
+            side_char_enable_ckb2, name_txt3, age_dd3, personality_dd3, job_dd3,
+            side_char_enable_ckb3, name_txt4, age_dd4, personality_dd4, job_dd4,
         ],
         outputs=[
             cursors, cur_cursor,
@@ -748,10 +767,10 @@ with gr.Blocks(css=STYLE) as demo:
         inputs=[
             cursors,
             genre_dd, place_dd, mood_dd,
-            name_txt1, age_dd1,
-            side_char_enable_ckb1, name_txt2, age_dd2,
-            side_char_enable_ckb2, name_txt3, age_dd3,
-            side_char_enable_ckb3, name_txt4, age_dd4,
+            name_txt1, age_dd1, personality_dd1, job_dd1,
+            side_char_enable_ckb1, name_txt2, age_dd2, personality_dd2, job_dd2,
+            side_char_enable_ckb2, name_txt3, age_dd3, personality_dd3, job_dd3,
+            side_char_enable_ckb3, name_txt4, age_dd4, personality_dd4, job_dd4,
         ],
         outputs=[
             action_btn1, action_btn2, action_btn3, progress_comp
@@ -784,10 +803,10 @@ with gr.Blocks(css=STYLE) as demo:
             cursors,
             action_btn2,
             genre_dd, place_dd, mood_dd,
-            name_txt1, age_dd1,
-            side_char_enable_ckb1, name_txt2, age_dd2,
-            side_char_enable_ckb2, name_txt3, age_dd3,
-            side_char_enable_ckb3, name_txt4, age_dd4,
+            name_txt1, age_dd1, personality_dd1, job_dd1,
+            side_char_enable_ckb1, name_txt2, age_dd2, personality_dd2, job_dd2,
+            side_char_enable_ckb2, name_txt3, age_dd3, personality_dd3, job_dd3,
+            side_char_enable_ckb3, name_txt4, age_dd4, personality_dd4, job_dd4,
         ],
         outputs=[
             cursors, cur_cursor,
@@ -800,10 +819,10 @@ with gr.Blocks(css=STYLE) as demo:
         inputs=[
             cursors,
             genre_dd, place_dd, mood_dd,
-            name_txt1, age_dd1,
-            side_char_enable_ckb1, name_txt2, age_dd2,
-            side_char_enable_ckb2, name_txt3, age_dd3,
-            side_char_enable_ckb3, name_txt4, age_dd4,
+            name_txt1, age_dd1, personality_dd1, job_dd1,
+            side_char_enable_ckb1, name_txt2, age_dd2, personality_dd2, job_dd2,
+            side_char_enable_ckb2, name_txt3, age_dd3, personality_dd3, job_dd3,
+            side_char_enable_ckb3, name_txt4, age_dd4, personality_dd4, job_dd4,
         ],
         outputs=[
             action_btn1, action_btn2, action_btn3, progress_comp
@@ -836,10 +855,10 @@ with gr.Blocks(css=STYLE) as demo:
             cursors,
             action_btn3,
             genre_dd, place_dd, mood_dd,
-            name_txt1, age_dd1,
-            side_char_enable_ckb1, name_txt2, age_dd2,
-            side_char_enable_ckb2, name_txt3, age_dd3,
-            side_char_enable_ckb3, name_txt4, age_dd4,
+            name_txt1, age_dd1, personality_dd1, job_dd1,
+            side_char_enable_ckb1, name_txt2, age_dd2, personality_dd2, job_dd2,
+            side_char_enable_ckb2, name_txt3, age_dd3, personality_dd3, job_dd3,
+            side_char_enable_ckb3, name_txt4, age_dd4, personality_dd4, job_dd4,
         ],
         outputs=[
             cursors, cur_cursor,
@@ -852,10 +871,10 @@ with gr.Blocks(css=STYLE) as demo:
         inputs=[
             cursors,
             genre_dd, place_dd, mood_dd,
-            name_txt1, age_dd1,
-            side_char_enable_ckb1, name_txt2, age_dd2,
-            side_char_enable_ckb2, name_txt3, age_dd3,
-            side_char_enable_ckb3, name_txt4, age_dd4,
+            name_txt1, age_dd1, personality_dd1, job_dd1,
+            side_char_enable_ckb1, name_txt2, age_dd2, personality_dd2, job_dd2,
+            side_char_enable_ckb2, name_txt3, age_dd3, personality_dd3, job_dd3,
+            side_char_enable_ckb3, name_txt4, age_dd4, personality_dd4, job_dd4,
         ],
         outputs=[
             action_btn1, action_btn2, action_btn3, progress_comp
@@ -888,10 +907,10 @@ with gr.Blocks(css=STYLE) as demo:
             cursors,
             custom_action_txt,
             genre_dd, place_dd, mood_dd,
-            name_txt1, age_dd1,
-            side_char_enable_ckb1, name_txt2, age_dd2,
-            side_char_enable_ckb2, name_txt3, age_dd3,
-            side_char_enable_ckb3, name_txt4, age_dd4,
+            name_txt1, age_dd1, personality_dd1, job_dd1,
+            side_char_enable_ckb1, name_txt2, age_dd2, personality_dd2, job_dd2,
+            side_char_enable_ckb2, name_txt3, age_dd3, personality_dd3, job_dd3,
+            side_char_enable_ckb3, name_txt4, age_dd4, personality_dd4, job_dd4,
         ],
         outputs=[
             cursors, cur_cursor,
@@ -904,10 +923,10 @@ with gr.Blocks(css=STYLE) as demo:
         inputs=[
             cursors,
             genre_dd, place_dd, mood_dd,
-            name_txt1, age_dd1,
-            side_char_enable_ckb1, name_txt2, age_dd2,
-            side_char_enable_ckb2, name_txt3, age_dd3,
-            side_char_enable_ckb3, name_txt4, age_dd4,
+            name_txt1, age_dd1, personality_dd1, job_dd1,
+            side_char_enable_ckb1, name_txt2, age_dd2, personality_dd2, job_dd2,
+            side_char_enable_ckb2, name_txt3, age_dd3, personality_dd3, job_dd3,
+            side_char_enable_ckb3, name_txt4, age_dd4, personality_dd4, job_dd4,
         ],
         outputs=[
             action_btn1, action_btn2, action_btn3, progress_comp
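The recurring change above rewires pure view transitions to run entirely in the browser: each `.click()` handler gets `fn=None` plus a `_js=` snippet from `interfaces/view_change_ui.py`, so switching phases no longer requires a Python round-trip. A minimal, self-contained sketch of the same Gradio pattern (assuming Gradio 3.x, where the keyword is `_js`; the components here are illustrative, not the app's own):

```python
import gradio as gr

# With fn=None, only the _js function runs: it receives the current values of
# `inputs` (in order) and must return the new values for `outputs`.
uppercase_js = "(text) => text.toUpperCase()"

with gr.Blocks() as demo:
    name_box = gr.Textbox(label="name")
    btn = gr.Button("Uppercase")
    # No server call is made for this event; the transformation happens client-side.
    btn.click(fn=None, inputs=[name_box], outputs=[name_box], _js=uppercase_js)

demo.launch()
```

In app.py the same mechanism is used with the phase containers as outputs and the JS functions defined in `interfaces/view_change_ui.py`.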
assets/palm_prompts.toml
CHANGED
@@ -1,3 +1,204 @@
+[story_gen]
+add_side_character = """side character #{cur_side_chars}
+- name: {name},
+- job: {job},
+- age: {age},
+- personality: {personality}
+
+"""
+first_story_gen = """Write the first three paragraphs of a novel as much detailed as possible. They should be based on the background information. Blend 5W1H principle into the stories as a plain text. Don't let the paragraphs end the whole story.
+
+background information:
+- genre: {genre}
+- where: {place}
+- mood: {mood}
+
+main character
+- name: {main_char_name}
+- job: {main_char_job}
+- age: {main_char_age}
+- personality: {main_char_personality}
+{side_char_placeholder}
+Fill in the following JSON output format:
+{{"paragraphs":"string"}}
+
+Write the JSON output:
+"""
+next_story_gen = """Write the next paragraphs. The next paragraphs should be determined by an option and well connected to the current stories.
+
+background information:
+- genre: {genre}
+- where: {place}
+- mood: {mood}
+
+main character
+- name: {main_char_name}
+- job: {main_char_job}
+- age: {main_char_age}
+- personality: {main_char_personality}
+{side_char_placeholder}
+Fill in the following JSON output format:
+{{"paragraphs":"string"}}
+
+stories
+{stories}
+
+option to the next stories: {action}
+
+Write the JSON output:
+"""
+actions_gen = """Suggest the 20 options to drive the stories to the next based on the information below.
+
+background information:
+- genre: {genre}
+- where: {place}
+- mood: {mood}
+
+main character
+- name: {main_char_name}
+- job: {main_char_job}
+- age: {main_char_age}
+- personality: {main_char_personality}
+{side_char_placeholder}
+Fill in the following JSON output format:
+{{"options":["string1", "string2", "string3"]}}
+
+summary of the story
+{summary}
+
+Write the JSON output:
+"""
+summarize = """Summarize the text below
+
+{stories}
+
+"""
+
+[chat_gen]
+chapter_title_ctx = """
+chapter{title_idx} {{
+title: {chapter_title},
+plot: {chapter_plot}
+}}
+"""
+add_side_character = """side character{cur_side_chars}: {{
+name: {name},
+job: {job},
+age: {age},
+personality: {personality}
+}}
+
+"""
+chat_context = """You are a professional writing advisor, especially specialized in developing ideas on plotting stories and creating characters. I provide genre, where, and mood along with the rough description of one main character and three side characters.
+
+Give creative but not too long responses based on the following information.
+
+genre: {genre}
+where: {place}
+mood: {mood}
+
+main character: {{
+name: {name1},
+job: {job1},
+age: {age1},
+personality: {personality1}
+}}
+{side_char_placeholder}
+{chapter_title_placeholder}
+"""
+
+[plot_gen]
+main_plot_gen="""Write a title and an outline of a novel based on the background information below in Ronald Tobias's plot theory. The outline should follow the "rising action", "crisis", "climax", "falling action", and "denouement" plot types. Each should be filled with a VERY detailed and descriptive at least two paragraphs of string. Randomly choose if the story goes optimistic or tragic.
+
+background information:
+- genre: string
+- where: string
+- mood: string
+
+main character
+- name: string
+- job: string
+- age: string
+- personality: string
+
+JSON output:
+{{
+"title": "string",
+"outline": {{
+"rising action": "paragraphs of string",
+"crisis": "paragraphs of string",
+"climax": "paragraphs of string",
+"falling action": "paragraphs of string",
+"denouement": "paragraphs of string"
+}}
+}}
+
+background information:
+- genre: {genre}
+- where: {place}
+- mood: {mood}
+
+main character
+- name: {main_char_name}
+- job: {main_char_job}
+- age: {main_char_age}
+- personality: {main_char_personality}
+{side_char_placeholder}
+JSON output:
+"""
+first_story_gen="""Write the chapter title and the first few paragraphs of the "rising action" plot based on the background information below in Ronald Tobias's plot theory. Also, suggest three choosable actions to drive current story in different directions. The first few paragraphs should be filled with a VERY MUCH detailed and descriptive at least two paragraphs of string.
+
+REMEMBER the first few paragraphs should not end the whole story and allow leaway for the next paragraphs to come.
+The whole story SHOULD stick to the "rising action -> crisis -> climax -> falling action -> denouement" flow, so REMEMBER not to write anything mentioned from the next plots of crisis, climax, falling action, and denouement yet.
+
+background information:
+- genre: string
+- where: string
+- mood: string
+
+main character
+- name: string
+- job: string
+- age: string
+- personality: string
+
+overall outline
+- title: string
+- rising action: string
+- crisis: string
+- climax: string
+- falling action: string
+- denouement: string
+
+JSON output:
+{{
+"chapter_title": "string",
+"paragraphs": ["string", "string", ...],
+"actions": ["string", "string", "string"]
+}}
+
+background information:
+- genre: {genre}
+- where: {place}
+- mood: {mood}
+
+main character
+- name: {main_char_name}
+- job: {main_char_job}
+- age: {main_char_age}
+- personality: {main_char_personality}
+{side_char_placeholder}
+overall outline
+- title: {title}
+- rising action: {rising_action}
+- crisis: {crisis}
+- climax: {climax}
+- falling action: {falling_action}
+- denouement: {denouement}
+
+JSON output:
+"""
+
 [image_gen]
 neg_prompt="nsfw, worst quality, low quality, lowres, bad anatomy, bad hands, text, watermark, signature, error, missing fingers, extra digit, fewer digits, cropped, worst quality, normal quality, blurry, username, extra limbs, twins, boring, jpeg artifacts"
 
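The new Python code reads these templates as a nested mapping, e.g. `palm_prompts['plot_gen']['main_plot_gen'].format(...)`. A minimal sketch of loading and filling one of them, assuming the third-party `toml` package is used for parsing (the actual loading happens in `modules/__init__.py`, which is not shown in this diff):

```python
import toml

# palm_prompts[section][key] -> template string; literal braces in the templates
# are escaped as {{ }} so str.format only substitutes the named placeholders.
with open("assets/palm_prompts.toml", encoding="utf-8") as f:
    palm_prompts = toml.load(f)

prompt = palm_prompts["story_gen"]["first_story_gen"].format(
    genre="fantasy", place="ruined castle", mood="tense",  # example values
    main_char_name="Aaron", main_char_job="knight",
    main_char_age="20s", main_char_personality="Adventurous",
    side_char_placeholder="",  # produced by the side-character helpers when enabled
)
print(prompt)
```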
constants/desc.py
CHANGED
@@ -8,7 +8,7 @@ background_setup_phase_description = """
 In this phase, users can setup the genre, place, and mood of the story. Especially, genre is the key that others are depending on.
 """
 character_setup_phase_description = """
-In this phase, users can setup characters up to four. For each character, users can decide their characteristics and basic information such as name, age
+In this phase, users can setup characters up to four. For each character, users can decide their characteristics and basic information such as name, age and personality. Also, the image of each character could be generated based on the information using Stable Diffusion.
 
 PaLM API translates the given character information into a list of keywords that Stable Diffusion could effectively understands. Then, Stable Diffusion generates images using the keywords as a prompt.
 """
constants/init_values.py
CHANGED
@@ -40,7 +40,6 @@ jobs = {
 }
 
 ages = ["10s", "20s", "30s", "40s", "50s"]
-mbtis = ["ESTJ", "ENTJ", "ESFJ", "ENFJ", "ISTJ", "ISFJ", "INTJ", "INFJ", "ESTP", "ESFP", "ENTP", "ENFP", "ISTP", "ISFP", "INTP", "INFP"]
 random_names = ["Aaron", "Abigail", "Adam", "Adrian", "Alan", "Alexandra", "Alyssa", "Amanda", "Amber", "Amy", "Andrea", "Andrew", "Angela", "Angelina", "Anthony", "Antonio", "Ashley", "Austin", "Benjamin", "Brandon", "Brian", "Brittany", "Brooke", "Bruce", "Bryan", "Caleb", "Cameron", "Carol", "Caroline", "Catherine", "Charles", "Charlotte", "Chase", "Chelsea", "Christopher", "Cody", "Colin", "Connor", "Cooper", "Corey", "Cristian", "Daniel", "David", "Deborah", "Denise", "Dennis", "Derek", "Diana", "Dorothy", "Douglas", "Dylan", "Edward", "Elizabeth", "Emily", "Emma", "Eric", "Ethan", "Evan", "Gabriel", "Gavin", "George", "Gina", "Grace", "Gregory", "Hannah", "Harrison", "Hayden", "Heather", "Helen", "Henry", "Holly", "Hope", "Hunter", "Ian", "Isaac", "Isabella", "Jack", "Jacob", "James", "Jason", "Jeffrey", "Jenna", "Jennifer", "Jessica", "Jesse", "Joan", "John", "Jonathan", "Joseph", "Joshua", "Justin", "Kayla", "Kevin", "Kimberly", "Kyle", "Laura", "Lauren", "Lawrence", "Leah", "Leo", "Leslie", "Levi", "Lewis", "Liam", "Logan", "Lucas", "Lucy", "Luis", "Luke", "Madison", "Maegan", "Maria", "Mark", "Matthew", "Megan", "Michael", "Michelle", "Molly", "Morgan", "Nathan", "Nathaniel", "Nicholas", "Nicole", "Noah", "Olivia", "Owen", "Paige", "Parker", "Patrick", "Paul", "Peter", "Philip", "Phoebe", "Rachel", "Randy", "Rebecca", "Richard", "Robert", "Roger", "Ronald", "Rose", "Russell", "Ryan", "Samantha", "Samuel", "Sandra", "Sarah", "Scott", "Sean", "Sebastian", "Seth", "Shannon", "Shawn", "Shelby", "Sierra", "Simon", "Sophia", "Stephanie", "Stephen", "Steven", "Sue", "Susan", "Sydney", "Taylor", "Teresa", "Thomas", "Tiffany", "Timothy", "Todd", "Tom", "Tommy", "Tracy", "Travis", "Tyler", "Victoria", "Vincent", "Violet", "Warren", "William", "Zach", "Zachary", "Zoe"]
 personalities = ['Optimistic', 'Kind', 'Resilient', 'Generous', 'Humorous', 'Creative', 'Empathetic', 'Ambitious', 'Adventurous']
 
interfaces/chat_ui.py
CHANGED
@@ -1,89 +1,77 @@
 import gradio as gr
 
 from interfaces import utils
-from modules import
+from modules import (
+    palmchat, palm_prompts,
+)
 
 from pingpong import PingPong
 
 def rollback_last_ui(history):
     return history[:-1]
 
+
+def add_side_character(enable, name, age, personality, job):
+    cur_side_chars = 1
+    prompt = ""
+    for idx in range(len(enable)):
+        if enable[idx]:
+            prompt += palm_prompts['chat_gen']['add_side_character'].format(
+                cur_side_chars=cur_side_chars,
+                name=name[idx],
+                job=job[idx],
+                age=age[idx],
+                personality=personality[idx]
+            )
+            cur_side_chars += 1
+    return "\n" + prompt if prompt else ""
+
+
+def add_chapter_title_ctx(chapter_title, chapter_plot):
+    title_idx = 1
+    prompt = ""
+    for idx in range(len(chapter_title)):
+        if chapter_title[idx]:
+            prompt += palm_prompts['chat_gen']['chapter_title_ctx'].format(
+                title_idx=title_idx,
+                chapter_title=chapter_title[idx],
+                chapter_plot=chapter_plot[idx],
+            )
+            title_idx += 1
+    return "\n" + prompt if prompt else ""
+
+
 async def chat(
     user_input, chat_mode, chat_state,
     genre, place, mood,
-
-
-
-
+    main_char_name, main_char_age, main_char_personality, main_char_job,
+    side_char_name1, side_char_age1, side_char_personality1, side_char_job1,
+    side_char_name2, side_char_age2, side_char_personality2, side_char_job2,
+    side_char_name3, side_char_age3, side_char_personality3, side_char_job3,
     chapter1_title, chapter2_title, chapter3_title, chapter4_title,
-    chapter1_plot, chapter2_plot, chapter3_plot, chapter4_plot
+    chapter1_plot, chapter2_plot, chapter3_plot, chapter4_plot,
+    side_char_enable1, side_char_enable2, side_char_enable3,
 ):
-    chapter_title_ctx =
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-    plot: {chapter4_plot}
-    }}
-"""
-
-    ctx = f"""You are a professional writing advisor, especially specialized in developing ideas on plotting stories and creating characters. I provide genre, where, and mood along with the rough description of one main character and three side characters.
-
-Give creative but not too long responses based on the following information.
-
-genre: {genre}
-where: {place}
-mood: {mood}
-
-main character: {{
-name: {name1},
-job: {job1},
-age: {age1},
-mbti: {mbti1},
-personality: {personality1}
-}}
-
-side character1: {{
-name: {name2},
-job: {job2},
-age: {age2},
-mbti: {mbti2},
-personality: {personality2}
-}}
-
-side character2: {{
-name: {name3},
-job: {job3},
-age: {age3},
-mbti: {mbti3},
-personality: {personality3}
-}}
-
-side character3: {{
-name: {name4},
-job: {job4},
-age: {age4},
-mbti: {mbti4},
-personality: {personality4}
-}}
-
-{chapter_title_ctx}
-"""
+    chapter_title_ctx = add_chapter_title_ctx(
+        [chapter1_title, chapter2_title, chapter3_title, chapter4_title],
+        [chapter1_plot, chapter2_plot, chapter3_plot, chapter4_plot],
+    )
+    side_char_prompt = add_side_character(
+        [side_char_enable1, side_char_enable2, side_char_enable3],
+        [side_char_name1, side_char_name2, side_char_name3],
+        [side_char_job1, side_char_job2, side_char_job3],
+        [side_char_age1, side_char_age2, side_char_age3],
+        [side_char_personality1, side_char_personality2, side_char_personality3],
+    )
+    prompt = palm_prompts['chat_gen']['chat_context'].format(
+        genre=genre, place=place, mood=mood,
+        main_char_name=main_char_name,
+        main_char_job=main_char_job,
+        main_char_age=main_char_age,
+        main_char_personality=main_char_personality,
+        side_char_placeholder=side_char_prompt,
+        chapter_title_placeholder=chapter_title_ctx,
+    )
 
     ppm = chat_state[chat_mode]
     ppm.ctx = ctx
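A small standalone usage sketch of the new `add_side_character` helper defined above; it assumes `palm_prompts` has already been loaded from `assets/palm_prompts.toml`, and the character values are made up for the example. Only enabled entries contribute a fragment:

```python
# Only index 0 is enabled, so exactly one [chat_gen].add_side_character fragment
# is rendered and returned (prefixed with a newline).
fragment = add_side_character(
    enable=[True, False, False],
    name=["Brooke", "", ""],
    age=["20s", "", ""],
    personality=["Kind", "", ""],
    job=["healer", "", ""],
)
print(fragment)
# side character1: {
# name: Brooke,
# job: healer,
# age: 20s,
# personality: Kind
# }
```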
interfaces/export_ui.py
CHANGED
@@ -29,10 +29,10 @@ title: """
 
 def export(
     title, cursors,
-    main_char_img, main_char_name, main_char_age,
-    side_char_enable1, side_char_img1, side_char_name1, side_char_age1,
-    side_char_enable2, side_char_img2, side_char_name2, side_char_age2,
-    side_char_enable3, side_char_img3, side_char_name3, side_char_age3,
+    main_char_img, main_char_name, main_char_age, main_char_personality, main_char_job,
+    side_char_enable1, side_char_img1, side_char_name1, side_char_age1, side_char_personality1, side_char_job1,
+    side_char_enable2, side_char_img2, side_char_name2, side_char_age2, side_char_personality2, side_char_job2,
+    side_char_enable3, side_char_img3, side_char_name3, side_char_age3, side_char_personality3, side_char_job3,
 ):
     print(main_char_img)
     characters = [
@@ -42,13 +42,13 @@ def export(
         }
     ]
     utils.add_side_character_to_export(
-        characters, side_char_enable1, side_char_img1, side_char_name1, side_char_age1,
+        characters, side_char_enable1, side_char_img1, side_char_name1, side_char_age1, side_char_personality1, side_char_job1
     )
     utils.add_side_character_to_export(
-        characters, side_char_enable2, side_char_img2, side_char_name2, side_char_age2,
+        characters, side_char_enable2, side_char_img2, side_char_name2, side_char_age2, side_char_personality2, side_char_job2
     )
     utils.add_side_character_to_export(
-        characters, side_char_enable3, side_char_img3, side_char_name3, side_char_age3,
+        characters, side_char_enable3, side_char_img3, side_char_name3, side_char_age3, side_char_personality3, side_char_job3
     )
 
     html_as_string = parser.gen_from_file(
interfaces/plot_gen_ui.py
CHANGED
@@ -1,91 +1,37 @@
|
|
1 |
import re
|
2 |
import gradio as gr
|
3 |
from interfaces import utils
|
4 |
-
from modules import
|
|
|
|
|
|
|
5 |
|
6 |
-
|
7 |
-
enable, prompt, cur_side_chars,
|
8 |
-
name, age, mbti, personality, job
|
9 |
-
):
|
10 |
-
if enable:
|
11 |
-
prompt = prompt + f"""
|
12 |
-
side character #{cur_side_chars}
|
13 |
-
- name: {name},
|
14 |
-
- job: {job},
|
15 |
-
- age: {age},
|
16 |
-
- mbti: {mbti},
|
17 |
-
- personality: {personality}
|
18 |
-
|
19 |
-
"""
|
20 |
-
cur_side_chars = cur_side_chars + 1
|
21 |
-
|
22 |
-
return prompt, cur_side_chars
|
23 |
-
|
24 |
|
25 |
async def plot_gen(
|
26 |
temperature,
|
27 |
genre, place, mood,
|
28 |
side_char_enable1, side_char_enable2, side_char_enable3,
|
29 |
-
|
30 |
-
|
31 |
-
|
32 |
-
|
33 |
):
|
34 |
-
|
35 |
-
|
36 |
-
|
37 |
-
|
38 |
-
|
39 |
-
|
40 |
-
- mood: string
|
41 |
-
|
42 |
-
main character
|
43 |
-
- name: string
|
44 |
-
- job: string
|
45 |
-
- age: string
|
46 |
-
- mbti: string
|
47 |
-
- personality: string
|
48 |
-
|
49 |
-
JSON output:
|
50 |
-
{{
|
51 |
-
"title": "string",
|
52 |
-
"outline": {{
|
53 |
-
"rising action": "paragraphs of string",
|
54 |
-
"crisis": "paragraphs of string",
|
55 |
-
"climax": "paragraphs of string",
|
56 |
-
"falling action": "paragraphs of string",
|
57 |
-
"denouement": "paragraphs of string"
|
58 |
-
}}
|
59 |
-
}}
|
60 |
-
|
61 |
-
background information:
|
62 |
-
- genre: {genre}
|
63 |
-
- where: {place}
|
64 |
-
- mood: {mood}
|
65 |
-
|
66 |
-
main character
|
67 |
-
- name: {name1}
|
68 |
-
- job: {job1}
|
69 |
-
- age: {age1}
|
70 |
-
- mbti: {mbti1}
|
71 |
-
- personality: {personality1}
|
72 |
-
|
73 |
-
"""
|
74 |
-
|
75 |
-
prompt, cur_side_chars = _add_side_character(
|
76 |
-
side_char_enable1, prompt, cur_side_chars,
|
77 |
-
name2, job2, age2, mbti2, personality2
|
78 |
)
|
79 |
-
prompt
|
80 |
-
|
81 |
-
|
|
|
|
|
|
|
|
|
82 |
)
|
83 |
-
prompt, cur_side_chars = _add_side_character(
|
84 |
-
side_char_enable3, prompt, cur_side_chars,
|
85 |
-
name4, job4, age4, mbti4, personality4
|
86 |
-
)
|
87 |
-
|
88 |
-
prompt = prompt + "JSON output:\n"
|
89 |
|
90 |
print(f"generated prompt:\n{prompt}")
|
91 |
parameters = {
|
@@ -95,7 +41,7 @@ main character
|
|
95 |
'top_k': 40,
|
96 |
'top_p': 1,
|
97 |
'max_output_tokens': 4096,
|
98 |
-
}
|
99 |
response_json = await utils.retry_until_valid_json(prompt, parameters=parameters)
|
100 |
|
101 |
return (
|
@@ -114,84 +60,33 @@ async def first_story_gen(
|
|
114 |
rising_action, crisis, climax, falling_action, denouement,
|
115 |
genre, place, mood,
|
116 |
side_char_enable1, side_char_enable2, side_char_enable3,
|
117 |
-
|
118 |
-
|
119 |
-
|
120 |
-
|
121 |
cursors, cur_cursor
|
122 |
):
|
123 |
-
|
124 |
-
|
125 |
-
|
126 |
-
|
127 |
-
|
128 |
-
|
129 |
-
|
130 |
-
background information:
|
131 |
-
- genre: string
|
132 |
-
- where: string
|
133 |
-
- mood: string
|
134 |
-
|
135 |
-
main character
|
136 |
-
- name: string
|
137 |
-
- job: string
|
138 |
-
- age: string
|
139 |
-
- mbti: string
|
140 |
-
- personality: string
|
141 |
-
|
142 |
-
overall outline
|
143 |
-
- title: string
|
144 |
-
- rising action: string
|
145 |
-
- crisis: string
|
146 |
-
- climax: string
|
147 |
-
- falling action: string
|
148 |
-
- denouement: string
|
149 |
-
|
150 |
-
JSON output:
|
151 |
-
{{
|
152 |
-
"chapter_title": "string",
|
153 |
-
"paragraphs": ["string", "string", ...],
|
154 |
-
"actions": ["string", "string", "string"]
|
155 |
-
}}
|
156 |
-
|
157 |
-
background information:
|
158 |
-
- genre: {genre}
|
159 |
-
- where: {place}
|
160 |
-
- mood: {mood}
|
161 |
-
|
162 |
-
main character
|
163 |
-
- name: {name1}
|
164 |
-
- job: {job1},
|
165 |
-
- age: {age1},
|
166 |
-
- mbti: {mbti1},
|
167 |
-
- personality: {personality1}
|
168 |
-
|
169 |
-
"""
|
170 |
-
|
171 |
-
prompt, cur_side_chars = _add_side_character(
|
172 |
-
side_char_enable1, prompt, cur_side_chars,
|
173 |
-
name2, job2, age2, mbti2, personality2
|
174 |
)
|
175 |
-
prompt
|
176 |
-
|
177 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
178 |
)
|
179 |
-
prompt, cur_side_chars = _add_side_character(
|
180 |
-
side_char_enable3, prompt, cur_side_chars,
|
181 |
-
name4, job4, age4, mbti4, personality4
|
182 |
-
)
|
183 |
-
|
184 |
-
prompt = prompt + f"""
|
185 |
-
overall outline
|
186 |
-
- title: {title}
|
187 |
-
- rising action: {rising_action}
|
188 |
-
- crisis: {crisis}
|
189 |
-
- climax: {climax}
|
190 |
-
- falling action: {falling_action}
|
191 |
-
- denouement: {denouement}
|
192 |
-
|
193 |
-
JSON output:
|
194 |
-
"""
|
195 |
|
196 |
print(f"generated prompt:\n{prompt}")
|
197 |
parameters = {
|
@@ -201,7 +96,7 @@ JSON output:
|
|
201 |
'top_k': 40,
|
202 |
'top_p': 1,
|
203 |
'max_output_tokens': 4096,
|
204 |
-
}
|
205 |
response_json = await utils.retry_until_valid_json(prompt, parameters=parameters)
|
206 |
|
207 |
chapter_title = response_json["chapter_title"]
|
|
|
1 |
import re
|
2 |
import gradio as gr
|
3 |
from interfaces import utils
|
4 |
+
from modules import(
|
5 |
+
palmchat,
|
6 |
+
palm_prompts,
|
7 |
+
)
|
8 |
|
9 |
+
print(palm_prompts)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
10 |
|
11 |
async def plot_gen(
|
12 |
temperature,
|
13 |
genre, place, mood,
|
14 |
         side_char_enable1, side_char_enable2, side_char_enable3,
15  +    main_char_name, main_char_age, main_char_personality, main_char_job,
16  +    side_char_name1, side_char_age1, side_char_personality1, side_char_job1,
17  +    side_char_name2, side_char_age2, side_char_personality2, side_char_job2,
18  +    side_char_name3, side_char_age3, side_char_personality3, side_char_job3,
19   ):
20  +    side_char_prompt = utils.add_side_character(
21  +        [side_char_enable1, side_char_enable2, side_char_enable3],
22  +        [side_char_name1, side_char_name2, side_char_name3],
23  +        [side_char_job1, side_char_job2, side_char_job3],
24  +        [side_char_age1, side_char_age2, side_char_age3],
25  +        [side_char_personality1, side_char_personality2, side_char_personality3],
26       )
27  +    prompt = palm_prompts['plot_gen']['main_plot_gen'].format(
28  +        genre=genre, place=place, mood=mood,
29  +        main_char_name=main_char_name,
30  +        main_char_job=main_char_job,
31  +        main_char_age=main_char_age,
32  +        main_char_personality=main_char_personality,
33  +        side_char_placeholder=side_char_prompt,
34       )
35
36       print(f"generated prompt:\n{prompt}")
37       parameters = {
…
41           'top_k': 40,
42           'top_p': 1,
43           'max_output_tokens': 4096,
44  +    }
45       response_json = await utils.retry_until_valid_json(prompt, parameters=parameters)
46
47       return (
…
60       rising_action, crisis, climax, falling_action, denouement,
61       genre, place, mood,
62       side_char_enable1, side_char_enable2, side_char_enable3,
63  +    main_char_name, main_char_age, main_char_personality, main_char_job,
64  +    side_char_name1, side_char_age1, side_char_personality1, side_char_job1,
65  +    side_char_name2, side_char_age2, side_char_personality2, side_char_job2,
66  +    side_char_name3, side_char_age3, side_char_personality3, side_char_job3,
67       cursors, cur_cursor
68   ):
69  +    side_char_prompt = utils.add_side_character(
70  +        [side_char_enable1, side_char_enable2, side_char_enable3],
71  +        [side_char_name1, side_char_name2, side_char_name3],
72  +        [side_char_job1, side_char_job2, side_char_job3],
73  +        [side_char_age1, side_char_age2, side_char_age3],
74  +        [side_char_personality1, side_char_personality2, side_char_personality3],
75       )
76  +    prompt = palm_prompts['plot_gen']['first_story_gen'].format(
77  +        genre=genre, place=place, mood=mood,
78  +        main_char_name=main_char_name,
79  +        main_char_job=main_char_job,
80  +        main_char_age=main_char_age,
81  +        main_char_personality=main_char_personality,
82  +        side_char_placeholder=side_char_prompt,
83  +        title=title,
84  +        rising_action=rising_action,
85  +        crisis=crisis,
86  +        climax=climax,
87  +        falling_action=falling_action,
88  +        denouement=denouement,
89       )
90
91       print(f"generated prompt:\n{prompt}")
92       parameters = {
…
96           'top_k': 40,
97           'top_p': 1,
98           'max_output_tokens': 4096,
99  +    }
100      response_json = await utils.retry_until_valid_json(prompt, parameters=parameters)
101
102      chapter_title = response_json["chapter_title"]
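The change above replaces the hand-written prompt strings in app.py with named templates looked up from palm_prompts (loaded from the new assets/palm_prompts.toml) and filled with str.format; optional side characters are rendered separately by utils.add_side_character and injected through the side_char_placeholder field. As a rough illustration of the mechanism only (the real template text lives in assets/palm_prompts.toml and is not shown in this diff; the wording below is invented, while the key and placeholder names follow the .format() calls above):

    # illustrative stand-in for the TOML-backed palm_prompts dictionary
    palm_prompts = {
        'plot_gen': {
            'main_plot_gen': (
                "Write a plot outline for a {genre} story set in {place} with a {mood} mood.\n"
                "main character\n"
                "- name: {main_char_name}\n"
                "- job: {main_char_job}\n"
                "- age: {main_char_age}\n"
                "- personality: {main_char_personality}\n"
                "{side_char_placeholder}"
            ),
        },
    }

    prompt = palm_prompts['plot_gen']['main_plot_gen'].format(
        genre="fantasy", place="a small harbor town", mood="cheerful",
        main_char_name="Mina", main_char_job="apprentice mage", main_char_age="19",
        main_char_personality="curious and stubborn",
        side_char_placeholder="",   # empty string when no side character is enabled
    )

Because the placeholder is just another format field, disabling every side character simply leaves that slot blank instead of requiring a separate code path.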
interfaces/story_gen_ui.py
CHANGED
@@ -6,38 +6,44 @@ from gradio_client import Client
6        from pathlib import Path
7
8        from modules import (
9   -        ImageMaker, MusicMaker, palmchat, merge_video
10       )
11       from interfaces import utils
12
13       from pingpong import PingPong
14       from pingpong.context import CtxLastWindowStrategy
15
16-18 -
19  -    #img_maker = ImageMaker('fantasyworldFp16.safetensors', vae="cute20vae.safetensors")
20  -    #img_maker = ImageMaker('forgesagalandscapemi.safetensors', vae="anythingFp16.safetensors")
21       bgm_maker = MusicMaker(model_size='small', output_format='mp3')
22
23  -    video_gen_client_url = "https://0447df3cf5f7c49c46.gradio.live"
24
25       async def update_story_gen(
26           cursors, cur_cursor_idx,
27           genre, place, mood,
28  -        main_char_name, main_char_age,
29  -        side_char_enable1, side_char_name1, side_char_age1,
30  -        side_char_enable2, side_char_name2, side_char_age2,
31  -        side_char_enable3, side_char_name3, side_char_age3,
32       ):
33           if len(cursors) == 1:
34               return await first_story_gen(
35                   cursors,
36                   genre, place, mood,
37  -                main_char_name, main_char_age,
38  -                side_char_enable1, side_char_name1, side_char_age1,
39  -                side_char_enable2, side_char_name2, side_char_age2,
40  -                side_char_enable3, side_char_name3, side_char_age3,
41                   cur_cursor_idx=cur_cursor_idx
42               )
43           else:
@@ -45,10 +51,10 @@ async def update_story_gen(
45                   cursors,
46                   None,
47                   genre, place, mood,
48  -                main_char_name, main_char_age,
49  -                side_char_enable1, side_char_name1, side_char_age1,
50  -                side_char_enable2, side_char_name2, side_char_age2,
51  -                side_char_enable3, side_char_name3, side_char_age3,
52                   cur_cursor_idx=cur_cursor_idx
53               )
54
@@ -56,10 +62,10 @@ async def next_story_gen(
56           cursors,
57           action,
58           genre, place, mood,
59  -        main_char_name, main_char_age,
60  -        side_char_enable1, side_char_name1, side_char_age1,
61  -        side_char_enable2, side_char_name2, side_char_age2,
62  -        side_char_enable3, side_char_name3, side_char_age3,
63           cur_cursor_idx=None
64       ):
65           stories = ""
@@ -71,46 +77,23 @@
71           for cursor in cursors[:end_idx]:
72               stories = stories + cursor["story"]
73
74-80 -      (start of the old inline prompt f-string; content not captured in this view)
81  -    main character
82  -    - name: {main_char_name}
83  -    - job: {main_char_job}
84  -    - age: {main_char_age}
85  -    - mbti: {main_char_mbti}
86  -    - personality: {main_char_personality}
87  -    """
88  -
89  -        prompt, cur_side_chars = utils.add_side_character(
90  -            side_char_enable1, prompt, cur_side_chars,
91  -            side_char_name1, side_char_job1, side_char_age1, side_char_mbti1, side_char_personality1
92  -        )
93  -        prompt, cur_side_chars = utils.add_side_character(
94  -            side_char_enable2, prompt, cur_side_chars,
95  -            side_char_name2, side_char_job2, side_char_age2, side_char_mbti2, side_char_personality2
96           )
97  -        prompt, cur_side_chars = utils.add_side_character(
98  -            side_char_enable3, prompt, cur_side_chars,
99  -            side_char_name3, side_char_job3, side_char_age3, side_char_mbti3, side_char_personality3
100 -        )
101 -
102 -        prompt = prompt + f"""
103 -    stories
104 -    {stories}
105
106-113 -    (rest of the old inline prompt; content not captured in this view)
114
115          print(f"generated prompt:\n{prompt}")
116          parameters = {
@@ -120,8 +103,13 @@ Fill in the following JSON output format:
120              'top_k': 40,
121              'top_p': 1,
122              'max_output_tokens': 4096,
123-124 -
125
126          story = response_json["paragraphs"]
127          if isinstance(story, list):
@@ -155,10 +143,10 @@ Fill in the following JSON output format:
155      async def actions_gen(
156          cursors,
157          genre, place, mood,
158 -        main_char_name, main_char_age,
159 -        side_char_enable1, side_char_name1, side_char_age1,
160 -        side_char_enable2, side_char_name2, side_char_age2,
161 -        side_char_enable3, side_char_name3, side_char_age3,
162          cur_cursor_idx=None
163      ):
164          stories = ""
@@ -168,11 +156,8 @@ async def actions_gen(
168          for cursor in cursors[:end_idx]:
169              stories = stories + cursor["story"]
170
171 -        summary_prompt = …
172 -
173 -    {stories}
174
175 -    """
176          print(f"generated prompt:\n{summary_prompt}")
177          parameters = {
178              'model': 'models/text-bison-001',
@@ -181,47 +166,32 @@
181              'top_k': 40,
182              'top_p': 1,
183              'max_output_tokens': 4096,
184-198 -    (old summary call and start of the old inline prompt; content not captured in this view)
199 -    - personality: {main_char_personality}
200 -    """
201 -        prompt, cur_side_chars = utils.add_side_character(
202 -            side_char_enable1, prompt, cur_side_chars,
203 -            side_char_name1, side_char_job1, side_char_age1, side_char_mbti1, side_char_personality1
204 -        )
205 -        prompt, cur_side_chars = utils.add_side_character(
206 -            side_char_enable2, prompt, cur_side_chars,
207 -            side_char_name2, side_char_job2, side_char_age2, side_char_mbti2, side_char_personality2
208          )
209 -        prompt …
210-211 -
212          )
213
214 -        prompt = prompt + f"""
215 -    summary of the story
216 -    {summary}
217 -
218 -    Fill in the following JSON output format:
219 -    {{
220 -        "options": ["string", "string", "string", ...]
221 -    }}
222 -
223 -    """
224 -
225          print(f"generated prompt:\n{prompt}")
226          parameters = {
227              'model': 'models/text-bison-001',
@@ -230,8 +200,13 @@ Fill in the following JSON output format:
230              'top_k': 40,
231              'top_p': 1,
232              'max_output_tokens': 4096,
233-234 -
235          actions = response_json["options"]
236
237          random_actions = random.sample(actions, 3)
@@ -246,49 +221,29 @@ Fill in the following JSON output format:
246      async def first_story_gen(
247          cursors,
248          genre, place, mood,
249 -        main_char_name, main_char_age,
250 -        side_char_enable1, side_char_name1, side_char_age1,
251 -        side_char_enable2, side_char_name2, side_char_age2,
252 -        side_char_enable3, side_char_name3, side_char_age3,
253          cur_cursor_idx=None
254      ):
255          cur_side_chars = 1
256
257-263 -    (start of the old inline prompt f-string; content not captured in this view)
264 -    main character
265 -    - name: {main_char_name}
266 -    - job: {main_char_job}
267 -    - age: {main_char_age}
268 -    - mbti: {main_char_mbti}
269 -    - personality: {main_char_personality}
270 -    """
271 -
272 -        prompt, cur_side_chars = utils.add_side_character(
273 -            side_char_enable1, prompt, cur_side_chars,
274 -            side_char_name1, side_char_job1, side_char_age1, side_char_mbti1, side_char_personality1
275          )
276 -        prompt …
277-278 -
279          )
280 -        prompt, cur_side_chars = utils.add_side_character(
281 -            side_char_enable3, prompt, cur_side_chars,
282 -            side_char_name3, side_char_job3, side_char_age3, side_char_mbti3, side_char_personality3
283 -        )
284 -
285 -        prompt = prompt + f"""
286 -    Fill in the following JSON output format:
287 -    {{
288 -        "paragraphs": "string"
289 -    }}
290 -
291 -    """
292
293          print(f"generated prompt:\n{prompt}")
294          parameters = {
@@ -298,8 +253,13 @@ Fill in the following JSON output format:
298              'top_k': 40,
299              'top_p': 1,
300              'max_output_tokens': 4096,
301-302 -
303
304          story = response_json["paragraphs"]
305          if isinstance(story, list):
@@ -363,12 +323,12 @@ def image_gen(
363          for _ in range(3):
364              try:
365                  prompt, neg_prompt = img_maker.generate_background_prompts(genre, place, mood, title, "", story_content)
366 -                neg_prompt …
367                  print(f"Image Prompt: {prompt}")
368                  print(f"Negative Prompt: {neg_prompt}")
369                  break
370              except Exception as e:
371                  print(e)
372
373          if not prompt:
374              raise ValueError("Failed to generate prompts for background image.")
@@ -400,12 +360,13 @@ def audio_gen(
400                  break
401              except Exception as e:
402                  print(e)
403
404          if not prompt:
405              raise ValueError("Failed to generate prompt for background music.")
406
407          # generate music
408 -        bgm_filename = bgm_maker.text2music(prompt, length=…
409          cursors[cur_cursor]["audio"] = bgm_filename
410
411          return (
6        from pathlib import Path
7
8        from modules import (
9   +        ImageMaker, MusicMaker, palmchat, palm_prompts, merge_video
10       )
11       from interfaces import utils
12
13       from pingpong import PingPong
14       from pingpong.context import CtxLastWindowStrategy
15
16  +    img_maker = ImageMaker('https://huggingface.co/jphan32/Zero2Story/landscapeAnimePro_v20Inspiration.safetensors',
17  +                           vae="https://huggingface.co/jphan32/Zero2Story/cute20vae.safetensors")
18  +
19       bgm_maker = MusicMaker(model_size='small', output_format='mp3')
20
21  +    video_gen_client_url = None # e.g. "https://0447df3cf5f7c49c46.gradio.live"
22  +
23  +    # default safety settings
24  +    safety_settings = [{"category":"HARM_CATEGORY_DEROGATORY","threshold":1},
25  +                       {"category":"HARM_CATEGORY_TOXICITY","threshold":1},
26  +                       {"category":"HARM_CATEGORY_VIOLENCE","threshold":2},
27  +                       {"category":"HARM_CATEGORY_SEXUAL","threshold":2},
28  +                       {"category":"HARM_CATEGORY_MEDICAL","threshold":2},
29  +                       {"category":"HARM_CATEGORY_DANGEROUS","threshold":2}]
30
31       async def update_story_gen(
32           cursors, cur_cursor_idx,
33           genre, place, mood,
34  +        main_char_name, main_char_age, main_char_personality, main_char_job,
35  +        side_char_enable1, side_char_name1, side_char_age1, side_char_personality1, side_char_job1,
36  +        side_char_enable2, side_char_name2, side_char_age2, side_char_personality2, side_char_job2,
37  +        side_char_enable3, side_char_name3, side_char_age3, side_char_personality3, side_char_job3,
38       ):
39           if len(cursors) == 1:
40               return await first_story_gen(
41                   cursors,
42                   genre, place, mood,
43  +                main_char_name, main_char_age, main_char_personality, main_char_job,
44  +                side_char_enable1, side_char_name1, side_char_age1, side_char_personality1, side_char_job1,
45  +                side_char_enable2, side_char_name2, side_char_age2, side_char_personality2, side_char_job2,
46  +                side_char_enable3, side_char_name3, side_char_age3, side_char_personality3, side_char_job3,
47                   cur_cursor_idx=cur_cursor_idx
48               )
49           else:
…
51                   cursors,
52                   None,
53                   genre, place, mood,
54  +                main_char_name, main_char_age, main_char_personality, main_char_job,
55  +                side_char_enable1, side_char_name1, side_char_age1, side_char_personality1, side_char_job1,
56  +                side_char_enable2, side_char_name2, side_char_age2, side_char_personality2, side_char_job2,
57  +                side_char_enable3, side_char_name3, side_char_age3, side_char_personality3, side_char_job3,
58                   cur_cursor_idx=cur_cursor_idx
59               )
60
…
62           cursors,
63           action,
64           genre, place, mood,
65  +        main_char_name, main_char_age, main_char_personality, main_char_job,
66  +        side_char_enable1, side_char_name1, side_char_age1, side_char_personality1, side_char_job1,
67  +        side_char_enable2, side_char_name2, side_char_age2, side_char_personality2, side_char_job2,
68  +        side_char_enable3, side_char_name3, side_char_age3, side_char_personality3, side_char_job3,
69           cur_cursor_idx=None
70       ):
71           stories = ""
…
77           for cursor in cursors[:end_idx]:
78               stories = stories + cursor["story"]
79
80  +        side_char_prompt = utils.add_side_character(
81  +            [side_char_enable1, side_char_enable2, side_char_enable3],
82  +            [side_char_name1, side_char_name2, side_char_name3],
83  +            [side_char_job1, side_char_job2, side_char_job3],
84  +            [side_char_age1, side_char_age2, side_char_age3],
85  +            [side_char_personality1, side_char_personality2, side_char_personality3],
86           )
87
88  +        prompt = palm_prompts['story_gen']['next_story_gen'].format(
89  +            genre=genre, place=place, mood=mood,
90  +            main_char_name=main_char_name,
91  +            main_char_job=main_char_job,
92  +            main_char_age=main_char_age,
93  +            main_char_personality=main_char_personality,
94  +            side_char_placeholder=side_char_prompt,
95  +            stories=stories, action=action,
96  +        )
97
98           print(f"generated prompt:\n{prompt}")
99           parameters = {
…
103              'top_k': 40,
104              'top_p': 1,
105              'max_output_tokens': 4096,
106 +            'safety_settings': safety_settings,
107 +        }
108 +        try:
109 +            response_json = await utils.retry_until_valid_json(prompt, parameters=parameters)
110 +        except Exception as e:
111 +            print(e)
112 +            raise gr.Error(e)
113
114          story = response_json["paragraphs"]
115          if isinstance(story, list):
…
143      async def actions_gen(
144          cursors,
145          genre, place, mood,
146 +        main_char_name, main_char_age, main_char_personality, main_char_job,
147 +        side_char_enable1, side_char_name1, side_char_age1, side_char_personality1, side_char_job1,
148 +        side_char_enable2, side_char_name2, side_char_age2, side_char_personality2, side_char_job2,
149 +        side_char_enable3, side_char_name3, side_char_age3, side_char_personality3, side_char_job3,
150          cur_cursor_idx=None
151      ):
152          stories = ""
…
156          for cursor in cursors[:end_idx]:
157              stories = stories + cursor["story"]
158
159 +        summary_prompt = palm_prompts['story_gen']['summarize'].format(stories=stories)
160
161          print(f"generated prompt:\n{summary_prompt}")
162          parameters = {
163              'model': 'models/text-bison-001',
…
166              'top_k': 40,
167              'top_p': 1,
168              'max_output_tokens': 4096,
169 +            'safety_settings': safety_settings,
170 +        }
171 +
172 +        try:
173 +            _, summary = await palmchat.gen_text(summary_prompt, mode="text", parameters=parameters)
174 +        except Exception as e:
175 +            print(e)
176 +            raise gr.Error(e)
177 +
178 +        side_char_prompt = utils.add_side_character(
179 +            [side_char_enable1, side_char_enable2, side_char_enable3],
180 +            [side_char_name1, side_char_name2, side_char_name3],
181 +            [side_char_job1, side_char_job2, side_char_job3],
182 +            [side_char_age1, side_char_age2, side_char_age3],
183 +            [side_char_personality1, side_char_personality2, side_char_personality3],
184          )
185 +        prompt = palm_prompts['story_gen']['actions_gen'].format(
186 +            genre=genre, place=place, mood=mood,
187 +            main_char_name=main_char_name,
188 +            main_char_job=main_char_job,
189 +            main_char_age=main_char_age,
190 +            main_char_personality=main_char_personality,
191 +            side_char_placeholder=side_char_prompt,
192 +            summary=summary,
193          )
194
195          print(f"generated prompt:\n{prompt}")
196          parameters = {
197              'model': 'models/text-bison-001',
…
200              'top_k': 40,
201              'top_p': 1,
202              'max_output_tokens': 4096,
203 +            'safety_settings': safety_settings,
204 +        }
205 +        try:
206 +            response_json = await utils.retry_until_valid_json(prompt, parameters=parameters)
207 +        except Exception as e:
208 +            print(e)
209 +            raise gr.Error(e)
210          actions = response_json["options"]
211
212          random_actions = random.sample(actions, 3)
…
221      async def first_story_gen(
222          cursors,
223          genre, place, mood,
224 +        main_char_name, main_char_age, main_char_personality, main_char_job,
225 +        side_char_enable1, side_char_name1, side_char_age1, side_char_personality1, side_char_job1,
226 +        side_char_enable2, side_char_name2, side_char_age2, side_char_personality2, side_char_job2,
227 +        side_char_enable3, side_char_name3, side_char_age3, side_char_personality3, side_char_job3,
228          cur_cursor_idx=None
229      ):
230          cur_side_chars = 1
231
232 +        side_char_prompt = utils.add_side_character(
233 +            [side_char_enable1, side_char_enable2, side_char_enable3],
234 +            [side_char_name1, side_char_name2, side_char_name3],
235 +            [side_char_job1, side_char_job2, side_char_job3],
236 +            [side_char_age1, side_char_age2, side_char_age3],
237 +            [side_char_personality1, side_char_personality2, side_char_personality3],
238          )
239 +        prompt = palm_prompts['story_gen']['first_story_gen'].format(
240 +            genre=genre, place=place, mood=mood,
241 +            main_char_name=main_char_name,
242 +            main_char_job=main_char_job,
243 +            main_char_age=main_char_age,
244 +            main_char_personality=main_char_personality,
245 +            side_char_placeholder=side_char_prompt,
246          )
247
248          print(f"generated prompt:\n{prompt}")
249          parameters = {
…
253              'top_k': 40,
254              'top_p': 1,
255              'max_output_tokens': 4096,
256 +            'safety_settings': safety_settings,
257 +        }
258 +        try:
259 +            response_json = await utils.retry_until_valid_json(prompt, parameters=parameters)
260 +        except Exception as e:
261 +            print(e)
262 +            raise gr.Error(e)
263
264          story = response_json["paragraphs"]
265          if isinstance(story, list):
…
323          for _ in range(3):
324              try:
325                  prompt, neg_prompt = img_maker.generate_background_prompts(genre, place, mood, title, "", story_content)
326                  print(f"Image Prompt: {prompt}")
327                  print(f"Negative Prompt: {neg_prompt}")
328                  break
329              except Exception as e:
330                  print(e)
331 +                raise gr.Error(e)
332
333          if not prompt:
334              raise ValueError("Failed to generate prompts for background image.")
…
360                  break
361              except Exception as e:
362                  print(e)
363 +                raise gr.Error(e)
364
365          if not prompt:
366              raise ValueError("Failed to generate prompt for background music.")
367
368          # generate music
369 +        bgm_filename = bgm_maker.text2music(prompt, length=60)
370          cursors[cur_cursor]["audio"] = bgm_filename
371
372          return (
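After this rewrite every handler in story_gen_ui.py follows the same shape: build the optional side-character block, fill a named template from palm_prompts['story_gen'], attach the module-level safety_settings to the PaLM parameters, and convert failures into gr.Error so they surface in the Gradio UI instead of failing silently. A condensed sketch of that shape (gen_step and its argument layout are hypothetical; the template wording is not shown in this diff):

    import gradio as gr

    from interfaces import utils
    from modules import palm_prompts

    async def gen_step(template_key, template_fields, side_chars, parameters):
        # side_chars = (enables, names, jobs, ages, personalities), each a 3-element list
        side_char_prompt = utils.add_side_character(*side_chars)
        prompt = palm_prompts['story_gen'][template_key].format(
            side_char_placeholder=side_char_prompt, **template_fields)
        try:
            # parameters is expected to carry 'safety_settings' as defined at the top of the module
            return await utils.retry_until_valid_json(prompt, parameters=parameters)
        except Exception as e:
            print(e)
            raise gr.Error(e)   # rendered as an error toast in the Gradio interface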
interfaces/ui.py
CHANGED
@@ -16,39 +16,64 @@ from modules import (
16
17       from interfaces import utils
18
19-20 -
21  -    img_maker = ImageMaker('hellonijicute25d_V10b.safetensors') # without_VAE
22
23       ############
24       # for plotting
25
26       def update_selected_char_image(evt: gr.EventData):
27           return evt.value
28
29  -    def get_random_name(cur_char_name, char_name1, char_name2, char_name3):
30-35 -    (old function body; content not captured in this view)
36
37
38       def gen_character_image(
39           gallery_images,
40  -        name, age,
41           genre, place, mood, creative_mode
42       ):
43           # generate prompts for character image with PaLM
44           for _ in range(3):
45               try:
46  -                prompt, neg_prompt = img_maker.generate_character_prompts(name, age, job, keywords=[ …
47                   print(f"Image Prompt: {prompt}")
48                   print(f"Negative Prompt: {neg_prompt}")
49                   break
50               except Exception as e:
51                   print(e)
52
53           if not prompt:
54               raise ValueError("Failed to generate prompts for character image.")
@@ -157,4 +182,4 @@ def reset():
157          'Your Own Story', # title_txt
158
159          "", # export_html
160 -    )

16
17       from interfaces import utils
18
19  +    img_maker = ImageMaker('https://huggingface.co/jphan32/Zero2Story/hellonijicute25d_V10b.safetensors',
20  +                           vae="https://huggingface.co/jphan32/Zero2Story/klF8Anime2Fp16.safetensors")
21
22       ############
23       # for plotting
24
25  +    get_random_name_js = f"""
26  +    function get_random_name(cur_char_name, char_name1, char_name2, char_name3) {{
27  +        console.log("hello world");
28  +
29  +        const names = {random_names};
30  +        const names_copy = JSON.parse(JSON.stringify(names));
31  +
32  +        console.log(names);
33  +
34  +        let index = names_copy.indexOf(cur_char_name);
35  +        names_copy.splice(index, 1);
36  +
37  +        index = names_copy.indexOf(char_name1);
38  +        names_copy.splice(index, 1)
39  +
40  +        index = names_copy.indexOf(char_name2);
41  +        names_copy.splice(index, 1);
42  +
43  +        index = names_copy.indexOf(char_name3);
44  +        names_copy.splice(index, 1);
45  +
46  +        return names_copy[(Math.floor(Math.random() * names_copy.length))];
47  +    }}
48  +    """
49  +
50       def update_selected_char_image(evt: gr.EventData):
51           return evt.value
52
53  +    # def get_random_name(cur_char_name, char_name1, char_name2, char_name3):
54  +    #     tmp_random_names = copy.deepcopy(random_names)
55  +    #     tmp_random_names.remove(cur_char_name)
56  +    #     tmp_random_names.remove(char_name1)
57  +    #     tmp_random_names.remove(char_name2)
58  +    #     tmp_random_names.remove(char_name3)
59  +    #     return random.choice(tmp_random_names)
60
61
62       def gen_character_image(
63           gallery_images,
64  +        name, age, personality, job,
65           genre, place, mood, creative_mode
66       ):
67           # generate prompts for character image with PaLM
68           for _ in range(3):
69               try:
70  +                prompt, neg_prompt = img_maker.generate_character_prompts(name, age, job, keywords=[personality, genre, place, mood], creative_mode=creative_mode)
71                   print(f"Image Prompt: {prompt}")
72                   print(f"Negative Prompt: {neg_prompt}")
73                   break
74               except Exception as e:
75                   print(e)
76  +                raise gr.Error(e)
77
78           if not prompt:
79               raise ValueError("Failed to generate prompts for character image.")
…
182          'Your Own Story', # title_txt
183
184          "", # export_html
185 +    )
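get_random_name_js replaces the commented-out Python helper: the name pool is interpolated into the JavaScript once through the f-string ({random_names}), and the picking then happens entirely in the browser. A sketch of how such a snippet is typically attached in Gradio (the button and textboxes are illustrative; Gradio 3.x exposes the hook as _js=, newer releases rename it to js=):

    import gradio as gr
    from interfaces import ui

    with gr.Blocks() as demo:
        # the four character-name textboxes of the real app are reduced to placeholders here
        cur_name = gr.Textbox(label="name")
        name1, name2, name3 = gr.Textbox(), gr.Textbox(), gr.Textbox()
        random_btn = gr.Button("Random name")

        random_btn.click(
            None,                                  # no Python callback, JavaScript only
            inputs=[cur_name, name1, name2, name3],
            outputs=cur_name,
            _js=ui.get_random_name_js,             # js= on current Gradio versions
        )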
interfaces/utils.py
CHANGED
@@ -3,12 +3,15 @@ import json
3        import string
4        import random
5
6   -    from modules import …
7        from pingpong.context import CtxLastWindowStrategy
8
9        def add_side_character_to_export(
10           characters, enable, img,
11  -        name, age,
12       ):
13           if enable:
14               characters.append(
@@ -20,23 +23,20 @@ def add_side_character_to_export(
20
21           return characters
22
23  -    def add_side_character(
24-25 -    (old parameter list; content not captured in this view)
26  -    ):
27-35 -    (old per-character prompt block; content not captured in this view)
36  -    """
37  -        cur_side_chars = cur_side_chars + 1
38  -
39  -        return prompt, cur_side_chars
40
41       def id_generator(size=6, chars=string.ascii_uppercase + string.digits):
42           return ''.join(random.choice(chars) for _ in range(size))
@@ -60,14 +60,28 @@ def parse_first_json_code_snippet(code_snippet):
60
61       async def retry_until_valid_json(prompt, parameters=None):
62           response_json = None
63  -        while response_json is None:
64  -            _, response_txt = await palmchat.gen_text(prompt, mode="text", parameters=parameters)
65  -            print(response_txt)
66
67               try:
68                   response_json = parse_first_json_code_snippet(response_txt)
69               except:
70                   pass
71
72           return response_json
73

3        import string
4        import random
5
6   +    from modules import (
7   +        palmchat, palm_prompts,
8   +    )
9   +
10       from pingpong.context import CtxLastWindowStrategy
11
12       def add_side_character_to_export(
13           characters, enable, img,
14  +        name, age, personality, job
15       ):
16           if enable:
17               characters.append(
…
23
24           return characters
25
26  +    def add_side_character(enable, name, age, personality, job):
27  +        cur_side_chars = 1
28  +        prompt = ""
29  +        for idx in range(len(enable)):
30  +            if enable[idx]:
31  +                prompt += palm_prompts['story_gen']['add_side_character'].format(
32  +                    cur_side_chars=cur_side_chars,
33  +                    name=name[idx],
34  +                    job=job[idx],
35  +                    age=age[idx],
36  +                    personality=personality[idx]
37  +                )
38  +                cur_side_chars += 1
39  +        return "\n" + prompt if prompt else ""
40
41       def id_generator(size=6, chars=string.ascii_uppercase + string.digits):
42           return ''.join(random.choice(chars) for _ in range(size))
…
60
61       async def retry_until_valid_json(prompt, parameters=None):
62           response_json = None
63
64  +        for _ in range(3):
65  +            try:
66  +                response, response_txt = await palmchat.gen_text(prompt, mode="text", parameters=parameters)
67  +                print(response_txt)
68  +            except Exception as e:
69  +                print("PaLM API has withheld a response due to content safety concerns. Retrying...")
70  +                continue
71  +
72               try:
73                   response_json = parse_first_json_code_snippet(response_txt)
74  +                break
75               except:
76  +                print("Parsing JSON failed. Retrying...")
77                   pass
78  +
79  +    if len(response.filters) > 0:
80  +        raise ValueError("PaLM API has withheld a response due to content safety concerns.")
81  +    elif response_json is None:
82  +        print("=== Failed to generate valid JSON response. ===")
83  +        print(response_txt)
84  +        raise ValueError("Failed to generate valid JSON response.")
85
86           return response_json
87
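The new add_side_character takes parallel lists instead of being called once per character, renders one stanza per enabled entry from the add_side_character template, and returns a single string (empty when nothing is enabled) that drops into the side_char_placeholder slot. A small usage example with made-up values:

    from interfaces import utils

    side_char_prompt = utils.add_side_character(
        [True, False, True],              # enable flags for side characters 1-3
        ["Hana", "Min", "Joon"],          # names
        ["healer", "thief", "bard"],      # jobs
        ["27", "19", "31"],               # ages
        ["calm", "impulsive", "witty"],   # personalities
    )
    # Only the first and third characters are rendered; with nothing enabled the result is "",
    # so the {side_char_placeholder} field simply disappears from the final prompt.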
interfaces/view_change_ui.py
CHANGED
@@ -10,4 +10,94 @@ def back_to_previous_view():
10       return (
11           gr.update(visible=True),
12           gr.update(visible=False),
13  -    )

10       return (
11           gr.update(visible=True),
12           gr.update(visible=False),
13  +    )
14  +
15  +    # pre_phase: False, background_setup_phase: True
16  +    pre_to_setup_js = """
17  +    function pre_to_setup() {
18  +        console.log("hello");
19  +
20  +        document.querySelector("#pre_phase").style.display = "none";
21  +        document.querySelector("#background_setup_phase").style.display = "flex";
22  +    }
23  +    """
24  +
25  +    # pre_phase: True, background_setup_phase: False
26  +    back_to_pre_js = """
27  +    function back_to_pre() {
28  +        document.querySelector("#pre_phase").style.display = "flex";
29  +        document.querySelector("#background_setup_phase").style.display = "none";
30  +    }
31  +    """
32  +
33  +    # background_setup_phase: False, character_setup_phase: True
34  +    world_setup_confirm_js = """
35  +    function world_setup_confirm() {
36  +        document.querySelector("#background_setup_phase").style.display = "none";
37  +        document.querySelector("#character_setup_phase").style.display = "flex";
38  +    }
39  +    """
40  +
41  +    # background_setup_phase: True, character_setup_phase: False
42  +    back_to_background_setup_js = """
43  +    function back_to_background_setup() {
44  +        document.querySelector("#background_setup_phase").style.display = "flex";
45  +        document.querySelector("#character_setup_phase").style.display = "none";
46  +    }
47  +    """
48  +
49  +    # pre_phase: False, writing_phase: True
50  +    restart_from_story_generation_js = """
51  +    function restart_from_story_generation() {
52  +        document.querySelector("#pre_phase").style.display = "none";
53  +        document.querySelector("#writing_phase").style.display = "flex";
54  +    }
55  +    """
56  +
57  +    # writing_phase: False, export_phase: True
58  +    story_writing_done_js = """
59  +    function story_writing_done() {
60  +        document.querySelector("#writing_phase").style.display = "none";
61  +        document.querySelector("#export_phase").style.display = "flex";
62  +    }
63  +    """
64  +
65  +    # export_phase: False, export_view_phase: True
66  +    export_done_js = """
67  +    function export_done() {
68  +        document.querySelector("#export_phase").style.display = "none";
69  +        document.querySelector("#export_view_phase").style.display = "flex";
70  +    }
71  +    """
72  +
73  +    # writing_phase: True, export_phase: False
74  +    back_to_story_writing_js = """
75  +    function back_to_story_writing() {
76  +        document.querySelector("#writing_phase").style.display = "flex";
77  +        document.querySelector("#export_phase").style.display = "none";
78  +    }
79  +    """
80  +
81  +    # pre_phase: True, export_phase: False
82  +    restart_from_export_js = """
83  +    function restart_from_export() {
84  +        document.querySelector("#pre_phase").style.display = "flex";
85  +        document.querySelector("#export_phase").style.display = "none";
86  +    }
87  +    """
88  +
89  +    # character_setup_phase: False, writing_phase: True
90  +    character_setup_confirm_js = """
91  +    function character_setup_confirm() {
92  +        document.querySelector("#character_setup_phase").style.display = "none";
93  +        document.querySelector("#writing_phase").style.display = "flex";
94  +    }
95  +    """
96  +
97  +    # pre_phase: True, export_view_phase: False
98  +    restart_from_export_view_js = """
99  +    function restart_from_export_view() {
100 +        document.querySelector("#pre_phase").style.display = "flex";
101 +        document.querySelector("#export_view_phase").style.display = "none";
102 +    }
103 +    """
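Each of these snippets switches phases purely on the client by flipping style.display on elements selected by id, so they only take effect if the corresponding Gradio containers are created with matching elem_id values and the snippet is attached to the event that should trigger the transition. A sketch of the assumed wiring (component names are illustrative; the JS hook is _js= on Gradio 3.x, js= on newer versions):

    import gradio as gr
    from interfaces import view_change_ui

    with gr.Blocks() as demo:
        with gr.Column(elem_id="pre_phase"):                       # id referenced by the querySelector
            start_btn = gr.Button("Start a new story")
        with gr.Column(elem_id="background_setup_phase", visible=False):
            gr.Markdown("genre / place / mood selection goes here")

        start_btn.click(None, inputs=None, outputs=None,
                        _js=view_change_ui.pre_to_setup_js)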
modules/__init__.py
CHANGED
@@ -4,6 +4,7 @@ from .palmchat import (
4        PaLMChatPromptFmt,
5        PaLMChatPPManager,
6        GradioPaLMChatPPManager,
7    )
8    from .utils import (
9        merge_video,

4        PaLMChatPromptFmt,
5        PaLMChatPPManager,
6        GradioPaLMChatPPManager,
7   +    palm_prompts,
8    )
9    from .utils import (
10       merge_video,
modules/image_maker.py
CHANGED
@@ -49,7 +49,8 @@ class ImageMaker:
49               sampling: Literal['sde-dpmsolver++'] = 'sde-dpmsolver++',
50               vae: str = None,
51               safety: bool = True,
52  -
53               device: str = None) -> None:
54           """Initialize the ImageMaker class.
55
@@ -59,6 +60,8 @@ class ImageMaker:
59               sampling (Literal['sde-dpmsolver++'], optional): Sampling method. Defaults to 'sde-dpmsolver++'.
60               vae (str, optional): Filename of the VAE model. Defaults to None.
61               safety (bool, optional): Whether to use the safety checker. Defaults to True.
62               device (str, optional): Device to use for the model. Defaults to None.
63           """
64
@@ -68,34 +71,41 @@ class ImageMaker:
68           self.__sampling = sampling
69           self.__vae = vae
70           self.__safety = safety
71  -        self. …
72
73           print("Loading the Stable Diffusion model into memory...")
74-95 -    (old single-path model-loading block; content not captured in this view)
96  -        self. …
97  -
98
99           print(f"Loaded model to {self.device}")
100          self.__sd_model = self.__sd_model.to(self.device)
101
@@ -189,6 +199,8 @@ class ImageMaker:
189          )
190          except asyncio.TimeoutError:
191              raise TimeoutError("The response time for PaLM API exceeded the limit.")
192
193          try:
194              res_json = json.loads(response_txt)
@@ -241,6 +253,8 @@ class ImageMaker:
241          )
242          except asyncio.TimeoutError:
243              raise TimeoutError("The response time for PaLM API exceeded the limit.")
244
245          try:
246              res_json = json.loads(response_txt)
@@ -272,6 +286,10 @@ class ImageMaker:
272          return self.__compel_proc.pad_conditioning_tensors_to_same_length([conditioning, negative_conditioning])
273
274
275      @property
276      def model_base(self):
277          """Model base

49               sampling: Literal['sde-dpmsolver++'] = 'sde-dpmsolver++',
50               vae: str = None,
51               safety: bool = True,
52  +             variant: str = None,
53  +             from_hf: bool = False,
54               device: str = None) -> None:
55           """Initialize the ImageMaker class.
56
…
60               sampling (Literal['sde-dpmsolver++'], optional): Sampling method. Defaults to 'sde-dpmsolver++'.
61               vae (str, optional): Filename of the VAE model. Defaults to None.
62               safety (bool, optional): Whether to use the safety checker. Defaults to True.
63  +             variant (str, optional): Variant of the model. Defaults to None.
64  +             from_hf (bool, optional): Whether to load the model from HuggingFace. Defaults to False.
65               device (str, optional): Device to use for the model. Defaults to None.
66           """
67
…
71           self.__sampling = sampling
72           self.__vae = vae
73           self.__safety = safety
74  +        self.__variant = variant
75  +        self.__from_hf = from_hf
76
77           print("Loading the Stable Diffusion model into memory...")
78  +        if not self.__from_hf:
79  +            # from file
80  +            self.__sd_model = StableDiffusionPipeline.from_single_file(self.model_base,
81  +                                                                       torch_dtype=torch.float16,
82  +                                                                       use_safetensors=True,
83  +            )
84  +
85  +            # Clip Skip
86  +            self.__sd_model.text_encoder.text_model.encoder.layers = self.__sd_model.text_encoder.text_model.encoder.layers[:12 - (self.clip_skip - 1)]
87  +
88  +            # Sampling method
89  +            if True: # TODO: Sampling method :: self.sampling == 'sde-dpmsolver++'
90  +                scheduler = DPMSolverMultistepScheduler.from_config(self.__sd_model.scheduler.config)
91  +                scheduler.config.algorithm_type = 'sde-dpmsolver++'
92  +                self.__sd_model.scheduler = scheduler
93  +
94  +            # VAE
95  +            if self.vae:
96  +                vae_model = AutoencoderKL.from_single_file(self.vae, use_safetensors=True)
97  +                self.__sd_model.vae = vae_model.to(dtype=torch.float16)
98  +
99  +            # Safety checker
100 +            if not self.safety:
101 +                self.__sd_model.safety_checker = None
102 +                self.__sd_model.requires_safety_checker = False
103
104 +        else:
105 +            # from huggingface
106 +            self.__sd_model = StableDiffusionPipeline.from_pretrained(self.model_base,
107 +                                                                      variant=self.__variant,
108 +                                                                      use_safetensors=True)
109          print(f"Loaded model to {self.device}")
110          self.__sd_model = self.__sd_model.to(self.device)
111
…
199          )
200          except asyncio.TimeoutError:
201              raise TimeoutError("The response time for PaLM API exceeded the limit.")
202 +        except:
203 +            raise Exception("PaLM API is not available.")
204
205          try:
206              res_json = json.loads(response_txt)
…
253          )
254          except asyncio.TimeoutError:
255              raise TimeoutError("The response time for PaLM API exceeded the limit.")
256 +        except:
257 +            raise Exception("PaLM API is not available.")
258
259          try:
260              res_json = json.loads(response_txt)
…
286          return self.__compel_proc.pad_conditioning_tensors_to_same_length([conditioning, negative_conditioning])
287
288
289 +    def push_to_hub(self, repo_id:str, commit_message:str=None, token:str=None, variant:str=None):
290 +        self.__sd_model.push_to_hub(repo_id, commit_message=commit_message, token=token, variant=variant)
291 +
292 +
293      @property
294      def model_base(self):
295          """Model base
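With from_hf=True the pipeline is loaded through StableDiffusionPipeline.from_pretrained, so model_base is treated as a diffusers repo id (optionally with a variant such as fp16) rather than a single .safetensors checkpoint, and push_to_hub lets a pipeline assembled from a single file be published back to the Hub. A hedged usage sketch (the repo ids and token are placeholders):

    from modules.image_maker import ImageMaker

    # load a full diffusers repository from the Hugging Face Hub
    hub_maker = ImageMaker("someuser/some-diffusers-model", from_hf=True, variant="fp16")

    # or keep the default single-file path, then publish the assembled pipeline as a repo
    file_maker = ImageMaker("hellonijicute25d_V10b.safetensors",
                            vae="klF8Anime2Fp16.safetensors")
    file_maker.push_to_hub("someuser/zero2story-character-model",
                           commit_message="initial upload",
                           token="hf_xxx",          # a write-scoped Hugging Face token
                           variant="fp16")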
modules/music_maker.py
CHANGED
@@ -124,6 +124,8 @@ class MusicMaker:
124          )
125          except asyncio.TimeoutError:
126              raise TimeoutError("The response time for PaLM API exceeded the limit.")
127
128          try:
129              res_json = json.loads(response_txt)

124          )
125          except asyncio.TimeoutError:
126              raise TimeoutError("The response time for PaLM API exceeded the limit.")
127 +        except:
128 +            raise Exception("PaLM API is not available.")
129
130          try:
131              res_json = json.loads(response_txt)
modules/palmchat.py
CHANGED
@@ -104,6 +104,7 @@ async def gen_text(
104          'temperature': temperature,
105          'top_k': top_k,
106          'top_p': top_p,
107      }
108      else:
109          parameters = {
@@ -116,14 +117,16 @@
116          'safety_settings': safety_settings,
117      }
118
119-126 -    (old call to the PaLM API; content not captured in this view)
127      else:
128          if mode == "chat":
129              response_txt = response.last

104          'temperature': temperature,
105          'top_k': top_k,
106          'top_p': top_p,
107 +        'safety_settings': safety_settings,
108      }
109      else:
110          parameters = {
…
117          'safety_settings': safety_settings,
118      }
119
120 +    try:
121 +        if mode == "chat":
122 +            response = await palm_api.chat_async(**parameters, messages=prompt)
123 +        else:
124 +            response = palm_api.generate_text(**parameters, prompt=prompt)
125 +    except:
126 +        raise EnvironmentError("PaLM API is not available.")
127 +
128 +    if use_filter and len(response.filters) > 0:
129 +        raise Exception("PaLM API has withheld a response due to content safety concerns.")
130      else:
131          if mode == "chat":
132              response_txt = response.last
|