qingxu98 committed on
Commit 8dd4d48 (1 Parent: 15f14f5)
Files changed (43)
  1. README.md +72 -33
  2. app.py +47 -78
  3. check_proxy.py +8 -0
  4. config.py +24 -7
  5. crazy_functional.py +34 -10
  6. crazy_functions/Latex全文润色.py +2 -2
  7. crazy_functions/Latex全文翻译.py +2 -2
  8. crazy_functions/Latex输出PDF结果.py +3 -0
  9. crazy_functions/crazy_utils.py +3 -174
  10. crazy_functions/latex_fns/latex_actions.py +6 -5
  11. crazy_functions/latex_fns/latex_toolbox.py +33 -4
  12. crazy_functions/multi_stage/multi_stage_utils.py +56 -8
  13. crazy_functions/pdf_fns/parse_pdf.py +2 -2
  14. crazy_functions/图片生成.py +105 -33
  15. crazy_functions/总结word文档.py +2 -7
  16. crazy_functions/批量Markdown翻译.py +2 -2
  17. crazy_functions/批量总结PDF文档.py +3 -8
  18. crazy_functions/批量翻译PDF文档_多线程.py +3 -8
  19. crazy_functions/理解PDF文档内容.py +4 -9
  20. crazy_functions/解析JupyterNotebook.py +2 -10
  21. docs/translate_english.json +110 -9
  22. docs/translate_traditionalchinese.json +3 -3
  23. multi_language.py +11 -11
  24. request_llms/bridge_all.py +36 -4
  25. request_llms/bridge_chatgpt.py +14 -8
  26. request_llms/bridge_chatgpt_vision.py +3 -20
  27. request_llms/bridge_deepseekcoder.py +44 -3
  28. request_llms/bridge_qwen.py +61 -66
  29. request_llms/bridge_spark.py +2 -2
  30. request_llms/com_sparkapi.py +29 -12
  31. request_llms/local_llm_class.py +2 -2
  32. request_llms/requirements_chatglm_onnx.txt +0 -2
  33. request_llms/requirements_moss.txt +0 -1
  34. request_llms/requirements_qwen.txt +1 -2
  35. requirements.txt +1 -0
  36. tests/test_llms.py +3 -2
  37. tests/test_plugins.py +3 -3
  38. tests/test_utils.py +6 -3
  39. themes/common.js +387 -49
  40. themes/green.css +2 -2
  41. themes/theme.py +98 -3
  42. toolbox.py +122 -17
  43. version +2 -2
README.md CHANGED
@@ -14,41 +14,69 @@ pinned: false
14
  >
15
  > 2023.11.12: 某些依赖包尚不兼容python 3.12,推荐python 3.11。
16
  >
17
- > 2023.11.7: 安装依赖时,请选择`requirements.txt`中**指定的版本**。 安装命令:`pip install -r requirements.txt`。本项目开源免费,近期发现有人蔑视开源协议并利用本项目违规圈钱,请提高警惕,谨防上当受骗。
18
 
 
19
 
 
 
 
 
20
 
21
- # <div align=center><img src="docs/logo.png" width="40"> GPT 学术优化 (GPT Academic)</div>
 
22
 
23
  **如果喜欢这个项目,请给它一个Star;如果您发明了好用的快捷键或插件,欢迎发pull requests!**
24
 
25
- If you like this project, please give it a Star. We also have a README in [English|](docs/README.English.md)[日本語|](docs/README.Japanese.md)[한국어|](docs/README.Korean.md)[Русский|](docs/README.Russian.md)[Français](docs/README.French.md) translated by this project itself.
26
- To translate this project to arbitrary language with GPT, read and run [`multi_language.py`](multi_language.py) (experimental).
 
 
27
 
28
- > **Note**
29
- >
30
  > 1.请注意只有 **高亮** 标识的插件(按钮)才支持读取文件,部分插件位于插件区的**下拉菜单**中。另外我们以**最高优先级**欢迎和处理任何新插件的PR。
31
  >
32
- > 2.本项目中每个文件的功能都在[自译解报告`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/GPT‐Academic项目自译解报告)详细说明。随着版本的迭代,您也可以随时自行点击相关函数插件,调用GPT重新生成项目的自我解析报告。常见问题[`wiki`](https://github.com/binary-husky/gpt_academic/wiki)[常规安装方法](#installation) | [一键安装脚本](https://github.com/binary-husky/gpt_academic/releases) | [配置说明](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明)。
 
33
  >
34
- > 3.本项目兼容并鼓励尝试国产大语言模型ChatGLM等。支持多个api-key共存,可在配置文件中填写如`API_KEY="openai-key1,openai-key2,azure-key3,api2d-key4"`。需要临时更换`API_KEY`时,在输入区输入临时的`API_KEY`然后回车键提交后即可生效。
35
-
36
 
37
-
38
 
39
  <div align="center">
40
 
41
  功能(⭐= 近期新增功能) | 描述
42
  --- | ---
43
- ⭐[接入新模型](https://github.com/binary-husky/gpt_academic/wiki/%E5%A6%82%E4%BD%95%E5%88%87%E6%8D%A2%E6%A8%A1%E5%9E%8B) | 百度[千帆](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Nlks5zkzu)与文心一言, 通义千问[Qwen](https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary),上海AI-Lab[书生](https://github.com/InternLM/InternLM),讯飞[星火](https://xinghuo.xfyun.cn/),[LLaMa2](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf),[智谱API](https://open.bigmodel.cn/),DALLE3, [DeepseekCoder](https://coder.deepseek.com/)
44
  润色、翻译、代码解释 | 一键润色、翻译、查找论文语法错误、解释代码
45
  [自定义快捷键](https://www.bilibili.com/video/BV14s4y1E7jN) | 支持自定义快捷键
46
  模块化设计 | 支持自定义强大的[插件](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions),插件支持[热更新](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
47
- [程序剖析](https://www.bilibili.com/video/BV1cj411A7VW) | [插件] 一键可以剖析Python/C/C++/Java/Lua/...项目树 或 [自我剖析](https://www.bilibili.com/video/BV1cj411A7VW)
48
  读论文、[翻译](https://www.bilibili.com/video/BV1KT411x7Wn)论文 | [插件] 一键解读latex/pdf论文全文并生成摘要
49
  Latex全文[翻译](https://www.bilibili.com/video/BV1nk4y1Y7Js/)、[润色](https://www.bilibili.com/video/BV1FT411H7c5/) | [插件] 一键翻译或润色latex论文
50
  批量注释生成 | [插件] 一键批量生成函数注释
51
- Markdown[中英互译](https://www.bilibili.com/video/BV1yo4y157jV/) | [插件] 看到上面5种语言的[README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md)了吗?
52
  chat分析报告生成 | [插件] 运行后自动生成总结汇报
53
  [PDF论文全文翻译功能](https://www.bilibili.com/video/BV1KT411x7Wn) | [插件] PDF论文提取题目&摘要+翻译全文(多线程)
54
  [Arxiv小助手](https://www.bilibili.com/video/BV1LM4y1279X) | [插件] 输入arxiv文章url即可一键翻译摘要+下载PDF
@@ -60,22 +88,22 @@ Latex论文一键校对 | [插件] 仿Grammarly对Latex文章进行语法、拼
60
  公式/图片/表格显示 | 可以同时显示公式的[tex形式和渲染形式](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png),支持公式、代码高亮
61
  ⭐AutoGen多智能体插件 | [插件] 借助微软AutoGen,探索多Agent的智能涌现可能!
62
  启动暗色[主题](https://github.com/binary-husky/gpt_academic/issues/173) | 在浏览器url后面添加```/?__theme=dark```可以切换dark主题
63
- [多LLM模型](https://www.bilibili.com/video/BV1wT411p7yf)支持 | 同时被GPT3.5、GPT4、[清华ChatGLM2](https://github.com/THUDM/ChatGLM2-6B)、[复旦MOSS](https://github.com/OpenLMLab/MOSS)同时伺候的感觉一定会很不错吧?
64
  ⭐ChatGLM2微调模型 | 支持加载ChatGLM2微调模型,提供ChatGLM2微调辅助插件
65
  更多LLM模型接入,支持[huggingface部署](https://huggingface.co/spaces/qingxu98/gpt-academic) | 加入Newbing接口(新必应),引入清华[Jittorllms](https://github.com/Jittor/JittorLLMs)支持[LLaMA](https://github.com/facebookresearch/llama)和[盘古α](https://openi.org.cn/pangu/)
66
  ⭐[void-terminal](https://github.com/binary-husky/void-terminal) pip包 | 脱离GUI,在Python中直接调用本项目的所有函数插件(开发中)
67
- ⭐虚空终端插件 | [插件] 用自然语言,直接调度本项目其他插件
68
  更多新功能展示 (图像生成等) …… | 见本文档结尾处 ……
69
  </div>
70
 
71
 
72
  - 新界面(修改`config.py`中的LAYOUT选项即可实现“左右布局”和“上下布局”的切换)
73
  <div align="center">
74
- <img src="https://github.com/binary-husky/gpt_academic/assets/96192199/d81137c3-affd-4cd1-bb5e-b15610389762" width="700" >
75
  </div>
76
 
77
 
78
- - 所有按钮都通过读取functional.py动态生成,可随意加自定义功能,解放粘贴板
79
  <div align="center">
80
  <img src="https://user-images.githubusercontent.com/96192199/231975334-b4788e91-4887-412f-8b43-2b9c5f41d248.gif" width="700" >
81
  </div>
@@ -85,21 +113,23 @@ Latex论文一键校对 | [插件] 仿Grammarly对Latex文章进行语法、拼
85
  <img src="https://user-images.githubusercontent.com/96192199/231980294-f374bdcb-3309-4560-b424-38ef39f04ebd.gif" width="700" >
86
  </div>
87
 
88
- - 如果输出包含公式,会同时以tex形式和渲染形式显示,方便复制和阅读
89
  <div align="center">
90
  <img src="https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png" width="700" >
91
  </div>
92
 
93
- - 懒得看项目代码?整个工程直接给chatgpt炫嘴里
94
  <div align="center">
95
  <img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="700" >
96
  </div>
97
 
98
- - 多种大语言模型混合调用(ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4)
99
  <div align="center">
100
  <img src="https://user-images.githubusercontent.com/96192199/232537274-deca0563-7aa6-4b5d-94a2-b7c453c47794.png" width="700" >
101
  </div>
102
 
 
 
103
  # Installation
104
  ### 安装方法I:直接运行 (Windows, Linux or MacOS)
105
 
@@ -110,13 +140,13 @@ Latex论文一键校对 | [插件] 仿Grammarly对Latex文章进行语法、拼
110
  cd gpt_academic
111
  ```
112
 
113
- 2. 配置API_KEY
114
 
115
- 在`config.py`中,配置API KEY等设置,[点击查看特殊网络环境设置方法](https://github.com/binary-husky/gpt_academic/issues/1)[Wiki页面](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明)。
116
 
117
- 「 程序会优先检查是否存在名为`config_private.py`的私密配置文件,并用其中的配置覆盖`config.py`的同名配置。如您能理解该读取逻辑,我们强烈建议您在`config.py`旁边创建一个名为`config_private.py`的新配置文件,并把`config.py`中的配置转移(复制)到`config_private.py`中(仅复制您修改过的配置条目即可)。
118
 
119
- 「 支持通过`环境变量`配置项目,环境变量的书写格式参考`docker-compose.yml`文件或者我们的[Wiki页面](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明)。配置读取优先级: `环境变量` > `config_private.py` > `config.py`。
120
 
121
 
122
  3. 安装依赖
@@ -149,6 +179,14 @@ git clone --depth=1 https://github.com/OpenLMLab/MOSS.git request_llms/moss #
149
 
150
  # 【可选步骤IV】确保config.py配置文件的AVAIL_LLM_MODELS包含了期望的模型,目前支持的全部模型如下(jittorllms系列目前仅支持docker方案):
151
  AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
 
 
 
 
 
 
 
 
152
  ```
153
 
154
  </p>
@@ -163,7 +201,7 @@ AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-
163
 
164
  ### 安装方法II:使用Docker
165
 
166
- 0. 部署项目的全部能力(这个是包含cuda和latex的大型镜像。但如果您网速慢、硬盘小,则不推荐使用这个)
167
  [![fullcapacity](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-all-capacity.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-all-capacity.yml)
168
 
169
  ``` sh
@@ -192,26 +230,26 @@ P.S. 如果需要依赖Latex的插件功能,请见Wiki。另外,您也可以
192
  ```
193
 
194
 
195
- ### 安装方法III:其他部署姿势
196
  1. **Windows一键运行脚本**。
197
- 完全不熟悉python环境的Windows用户可以下载[Release](https://github.com/binary-husky/gpt_academic/releases)中发布的一键运行脚本安装无本地模型的版本。
198
- 脚本的贡献来源是[oobabooga](https://github.com/oobabooga/one-click-installers)。
199
 
200
  2. 使用第三方API、Azure等、文心一言、星火等,见[Wiki页面](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明)
201
 
202
  3. 云服务器远程部署避坑指南。
203
  请访问[云服务器远程部署wiki](https://github.com/binary-husky/gpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)
204
 
205
- 4. 一些新型的部署平台或方法
206
  - 使用Sealos[一键部署](https://github.com/binary-husky/gpt_academic/issues/993)。
207
  - 使用WSL2(Windows Subsystem for Linux 子系统)。请访问[部署wiki-2](https://github.com/binary-husky/gpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
208
  - 如何在二级网址(如`http://localhost/subpath`)下运行。请访问[FastAPI运行说明](docs/WithFastapi.md)
209
 
 
210
 
211
  # Advanced Usage
212
  ### I:自定义新的便捷按钮(学术快捷键)
213
 
214
- 任意文本编辑器打开`core_functional.py`,添加条目如下,然后重启程序。(如按钮已存在,那么前缀、后缀都支持热修改,无需重启程序即可生效。)
215
  例如
216
 
217
  ```python
@@ -233,6 +271,7 @@ P.S. 如果需要依赖Latex的插件功能,请见Wiki。另外,您也可以
233
  本项目的插件编写、调试难度很低,只要您具备一定的python基础知识,就可以仿照我们提供的模板实现自己的插件功能。
234
  详情请参考[函数插件指南](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)。
235
 
 
236
 
237
  # Updates
238
  ### I:动态
@@ -332,7 +371,7 @@ GPT Academic开发者QQ群:`610599535`
332
 
333
  - 已知问题
334
  - 某些浏览器翻译插件干扰此软件前端的运行
335
- - 官方Gradio目前有很多兼容性Bug,请务必使用`requirement.txt`安装Gradio
336
 
337
  ### III:主题
338
  可以通过修改`THEME`选项(config.py)变更主题
@@ -343,8 +382,8 @@ GPT Academic开发者QQ群:`610599535`
343
 
344
  1. `master` 分支: 主分支,稳定版
345
  2. `frontier` 分支: 开发分支,测试版
346
- 3. 如何接入其他大模型:[接入其他大模型](request_llms/README.md)
347
-
348
 
349
  ### V:参考与学习
350
 
 
14
  >
15
  > 2023.11.12: 某些依赖包尚不兼容python 3.12,推荐python 3.11。
16
  >
17
+ > 2023.12.26: 安装依赖时,请选择`requirements.txt`中**指定的版本**。 安装命令:`pip install -r requirements.txt`。本项目完全开源免费,您可通过订阅[在线服务](https://github.com/binary-husky/gpt_academic/wiki/online)的方式鼓励本项目的发展。
18
 
19
+ <br>
20
 
21
+ <div align=center>
22
+ <h1 align="center">
23
+ <img src="docs/logo.png" width="40"> GPT 学术优化 (GPT Academic)
24
+ </h1>
25
 
26
+ [![Github][Github-image]][Github-url]
27
+ [![License][License-image]][License-url]
28
+ [![Releases][Releases-image]][Releases-url]
29
+ [![Installation][Installation-image]][Installation-url]
30
+ [![Wiki][Wiki-image]][Wiki-url]
31
+ [![PR][PRs-image]][PRs-url]
32
+
33
+ [Github-image]: https://img.shields.io/badge/github-12100E.svg?style=flat-square
34
+ [License-image]: https://img.shields.io/github/license/binary-husky/gpt_academic?label=License&style=flat-square&color=orange
35
+ [Releases-image]: https://img.shields.io/github/release/binary-husky/gpt_academic?label=Release&style=flat-square&color=blue
36
+ [Installation-image]: https://img.shields.io/badge/dynamic/json?color=blue&url=https://raw.githubusercontent.com/binary-husky/gpt_academic/master/version&query=$.version&label=Installation&style=flat-square
37
+ [Wiki-image]: https://img.shields.io/badge/wiki-项目文档-black?style=flat-square
38
+ [PRs-image]: https://img.shields.io/badge/PRs-welcome-pink?style=flat-square
39
+
40
+ [Github-url]: https://github.com/binary-husky/gpt_academic
41
+ [License-url]: https://github.com/binary-husky/gpt_academic/blob/master/LICENSE
42
+ [Releases-url]: https://github.com/binary-husky/gpt_academic/releases
43
+ [Installation-url]: https://github.com/binary-husky/gpt_academic#installation
44
+ [Wiki-url]: https://github.com/binary-husky/gpt_academic/wiki
45
+ [PRs-url]: https://github.com/binary-husky/gpt_academic/pulls
46
+
47
+
48
+ </div>
49
+ <br>
50
 
51
  **如果喜欢这个项目,请给它一个Star;如果您发明了好用的快捷键或插件,欢迎发pull requests!**
52
 
53
+ If you like this project, please give it a Star.
54
+ Read this in [English](docs/README.English.md) | [日本語](docs/README.Japanese.md) | [한국어](docs/README.Korean.md) | [Русский](docs/README.Russian.md) | [Français](docs/README.French.md). All translations have been provided by the project itself. To translate this project to arbitrary language with GPT, read and run [`multi_language.py`](multi_language.py) (experimental).
55
+ <br>
56
+
57
 
 
 
58
  > 1.请注意只有 **高亮** 标识的插件(按钮)才支持读取文件,部分插件位于插件区的**下拉菜单**中。另外我们以**最高优先级**欢迎和处理任何新插件的PR。
59
  >
60
+ > 2.本项目中每个文件的功能都在[自译解报告](https://github.com/binary-husky/gpt_academic/wiki/GPT‐Academic项目自译解报告)`self_analysis.md`详细说明。随着版本的迭代,您也可以随时自行点击相关函数插件,调用GPT重新生成项目的自我解析报告。常见问题请查阅wiki。
61
+ > [![常规安装方法](https://img.shields.io/static/v1?label=&message=常规安装方法&color=gray)](#installation) [![一键安装脚本](https://img.shields.io/static/v1?label=&message=一键安装脚本&color=gray)](https://github.com/binary-husky/gpt_academic/releases) [![配置说明](https://img.shields.io/static/v1?label=&message=配置说明&color=gray)](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明) [![wiki](https://img.shields.io/static/v1?label=&message=wiki&color=gray)](https://github.com/binary-husky/gpt_academic/wiki)
62
  >
63
+ > 3.本项目兼容并鼓励尝试国产大语言模型ChatGLM等。支持多个api-key共存,可在配置文件中填写如`API_KEY="openai-key1,openai-key2,azure-key3,api2d-key4"`。需要临时更换`API_KEY`时,在输入区输入临时的`API_KEY`然后回车键提交即可生效。
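下面用一个示意性的最小片段说明逗号分隔的多密钥如何拆分与按前缀筛选(假设写法,`pick_keys` 并非本项目的实际函数):

```python
# 示意代码:拆分逗号分隔的 API_KEY 字符串,并按提供商前缀筛选(仅供理解)
API_KEY = "openai-key1,openai-key2,azure-key3,api2d-key4"

def pick_keys(api_key_string, prefix):
    keys = [k.strip() for k in api_key_string.split(",")]  # 按英文逗号拆分并去除首尾空格
    return [k for k in keys if k.startswith(prefix)]       # 按提供商前缀筛选

print(pick_keys(API_KEY, "azure-"))  # ['azure-key3']
```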
 
64
 
65
+ <br><br>
66
 
67
  <div align="center">
68
 
69
  功能(⭐= 近期新增功能) | 描述
70
  --- | ---
71
+ ⭐[接入新模型](https://github.com/binary-husky/gpt_academic/wiki/%E5%A6%82%E4%BD%95%E5%88%87%E6%8D%A2%E6%A8%A1%E5%9E%8B) | 百度[千帆](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Nlks5zkzu)与文心一言, 通义千问[Qwen](https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary),上海AI-Lab[书生](https://github.com/InternLM/InternLM),讯飞[星火](https://xinghuo.xfyun.cn/),[LLaMa2](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf),[智谱API](https://open.bigmodel.cn/),DALLE3, [DeepseekCoder](https://coder.deepseek.com/)
72
  润色、翻译、代码解释 | 一键润色、翻译、查找论文语法错误、解释代码
73
  [自定义快捷键](https://www.bilibili.com/video/BV14s4y1E7jN) | 支持自定义快捷键
74
  模块化设计 | 支持自定义强大的[插件](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions),插件支持[热更新](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
75
+ [程序剖析](https://www.bilibili.com/video/BV1cj411A7VW) | [插件] 一键剖析Python/C/C++/Java/Lua/...项目树 或 [自我剖析](https://www.bilibili.com/video/BV1cj411A7VW)
76
  读论文、[翻译](https://www.bilibili.com/video/BV1KT411x7Wn)论文 | [插件] 一键解读latex/pdf论文全文并生成摘要
77
  Latex全文[翻译](https://www.bilibili.com/video/BV1nk4y1Y7Js/)、[润色](https://www.bilibili.com/video/BV1FT411H7c5/) | [插件] 一键翻译或润色latex论文
78
  批量注释生成 | [插件] 一键批量生成函数注释
79
+ Markdown[中英互译](https://www.bilibili.com/video/BV1yo4y157jV/) | [插件] 看到上面5种语言的[README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md)了吗?就是出自他的手笔
80
  chat分析报告生成 | [插件] 运行后自动生成总结汇报
81
  [PDF论文全文翻译功能](https://www.bilibili.com/video/BV1KT411x7Wn) | [插件] PDF论文提取题目&摘要+翻译全文(多线程)
82
  [Arxiv小助手](https://www.bilibili.com/video/BV1LM4y1279X) | [插件] 输入arxiv文章url即可一键翻译摘要+下载PDF
 
88
  公式/图片/表格显示 | 可以同时显示公式的[tex形式和渲染形式](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png),支持公式、代码高亮
89
  ⭐AutoGen多智能体插件 | [插件] 借助微软AutoGen,探索多Agent的智能涌现可能!
90
  启动暗色[主题](https://github.com/binary-husky/gpt_academic/issues/173) | 在浏览器url后面添加```/?__theme=dark```可以切换dark主题
91
+ [多LLM模型](https://www.bilibili.com/video/BV1wT411p7yf)支持 | 同时被GPT3.5、GPT4、[清华ChatGLM2](https://github.com/THUDM/ChatGLM2-6B)、[复旦MOSS](https://github.com/OpenLMLab/MOSS)伺候的感觉一定会很不错吧?
92
  ⭐ChatGLM2微调模型 | 支持加载ChatGLM2微调模型,提供ChatGLM2微调辅助插件
93
  更多LLM模型接入,支持[huggingface部署](https://huggingface.co/spaces/qingxu98/gpt-academic) | 加入Newbing接口(新必应),引入清华[Jittorllms](https://github.com/Jittor/JittorLLMs)支持[LLaMA](https://github.com/facebookresearch/llama)和[盘古α](https://openi.org.cn/pangu/)
94
  ⭐[void-terminal](https://github.com/binary-husky/void-terminal) pip包 | 脱离GUI,在Python中直接调用本项目的所有函数插件(开发中)
95
+ ⭐虚空终端插件 | [插件] 能够使用自然语言直接调度本项目其他插件
96
  更多新功能展示 (图像生成等) …… | 见本文档结尾处 ……
97
  </div>
98
 
99
 
100
  - 新界面(修改`config.py`中的LAYOUT选项即可实现“左右布局”和“上下布局”的切换)
101
  <div align="center">
102
+ <img src="https://user-images.githubusercontent.com/96192199/279702205-d81137c3-affd-4cd1-bb5e-b15610389762.gif" width="700" >
103
  </div>
104
 
105
 
106
+ - 所有按钮都通过读取functional.py动态生成,可随意加自定义功能,解放剪贴板
107
  <div align="center">
108
  <img src="https://user-images.githubusercontent.com/96192199/231975334-b4788e91-4887-412f-8b43-2b9c5f41d248.gif" width="700" >
109
  </div>
 
113
  <img src="https://user-images.githubusercontent.com/96192199/231980294-f374bdcb-3309-4560-b424-38ef39f04ebd.gif" width="700" >
114
  </div>
115
 
116
+ - 如果输出包含公式,会以tex形式和渲染形式同时显示,方便复制和阅读
117
  <div align="center">
118
  <img src="https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png" width="700" >
119
  </div>
120
 
121
+ - 懒得看项目代码?直接把整个工程炫ChatGPT嘴里
122
  <div align="center">
123
  <img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="700" >
124
  </div>
125
 
126
+ - 多种大语言模型混合调用(ChatGLM + OpenAI-GPT3.5 + GPT4)
127
  <div align="center">
128
  <img src="https://user-images.githubusercontent.com/96192199/232537274-deca0563-7aa6-4b5d-94a2-b7c453c47794.png" width="700" >
129
  </div>
130
 
131
+ <br><br>
132
+
133
  # Installation
134
  ### 安装方法I:直接运行 (Windows, Linux or MacOS)
135
 
 
140
  cd gpt_academic
141
  ```
142
 
143
+ 2. 配置API_KEY等变量
144
 
145
+ 在`config.py`中,配置API KEY等变量。[特殊网络环境设置方法](https://github.com/binary-husky/gpt_academic/issues/1)[Wiki-项目配置说明](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明)。
146
 
147
+ 「 程序会优先检查是否存在名为`config_private.py`的私密配置文件,并用其中的配置覆盖`config.py`的同名配置。如您能理解以上读取逻辑,我们强烈建议您在`config.py`同路径下创建一个名为`config_private.py`的新配置文件,并使用`config_private.py`配置项目,以确保更新或其他用户无法轻易查看您的私有配置 」。
148
 
149
+ 「 支持通过`环境变量`配置项目,环境变量的书写格式参考`docker-compose.yml`文件或者我们的[Wiki页面](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明)。配置读取优先级: `环境变量` > `config_private.py` > `config.py` 」。
150
 
151
 
152
  3. 安装依赖
 
179
 
180
  # 【可选步骤IV】确保config.py配置文件的AVAIL_LLM_MODELS包含了期望的模型,目前支持的全部模型如下(jittorllms系列目前仅支持docker方案):
181
  AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
182
+
183
+ # 【可选步骤V】支持本地模型INT8,INT4量化(这里所指的模型本身不是量化版本,目前deepseek-coder支持,后面测试后会加入更多模型量化选择)
184
+ pip install bitsandbytes
185
+ # windows用户安装bitsandbytes需要使用下面bitsandbytes-windows-webui
186
+ python -m pip install bitsandbytes --prefer-binary --extra-index-url=https://jllllll.github.io/bitsandbytes-windows-webui
187
+ pip install -U git+https://github.com/huggingface/transformers.git
188
+ pip install -U git+https://github.com/huggingface/accelerate.git
189
+ pip install peft
190
  ```
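提示:按上面的「可选步骤V」装好量化依赖后,还需在配置中启用对应的量化档位(该选项即本次提交 config.py 中的 `LOCAL_MODEL_QUANT`):

```python
# 写入 config.py / config_private.py(选项名与可选值取自本次提交的 config.py)
LOCAL_MODEL_QUANT = "INT4"  # 默认 "FP16";"INT4" / "INT8" 启用对应量化版本
```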
191
 
192
  </p>
 
201
 
202
  ### 安装方法II:使用Docker
203
 
204
+ 0. 部署项目的全部能力(这个是包含cuda和latex的大型镜像。但如果您网速慢、硬盘小,则不推荐该方法部署完整项目)
205
  [![fullcapacity](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-all-capacity.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-all-capacity.yml)
206
 
207
  ``` sh
 
230
  ```
231
 
232
 
233
+ ### 安装方法III:其他部署方法
234
  1. **Windows一键运行脚本**。
235
+ 完全不熟悉python环境的Windows用户可以下载[Release](https://github.com/binary-husky/gpt_academic/releases)中发布的一键运行脚本安装无本地模型的版本。脚本贡献来源:[oobabooga](https://github.com/oobabooga/one-click-installers)。
 
236
 
237
  2. 使用第三方API、Azure等、文心一言、星火等,见[Wiki页面](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明)
238
 
239
  3. 云服务器远程部署避坑指南。
240
  请访问[云服务器远程部署wiki](https://github.com/binary-husky/gpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)
241
 
242
+ 4. 在其他平台部署&二级网址部署
243
  - 使用Sealos[一键部署](https://github.com/binary-husky/gpt_academic/issues/993)。
244
  - 使用WSL2(Windows Subsystem for Linux 子系统)。请访问[部署wiki-2](https://github.com/binary-husky/gpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
245
  - 如何在二级网址(如`http://localhost/subpath`)下运行。请访问[FastAPI运行说明](docs/WithFastapi.md)
246
 
247
+ <br><br>
248
 
249
  # Advanced Usage
250
  ### I:自定义新的便捷按钮(学术快捷键)
251
 
252
+ 任意文本编辑器打开`core_functional.py`,添加如下条目,然后重启程序。(如果按钮已存在,那么可以直接修改(前缀、后缀都已支持热修改),无需重启程序即可生效。)
253
  例如
254
 
255
  ```python
 
271
  本项目的插件编写、调试难度很低,只要您具备一定的python基础知识,就可以仿照我们提供的模板实现自己的插件功能。
272
  详情请参考[函数插件指南](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)。
273
 
274
+ <br><br>
275
 
276
  # Updates
277
  ### I:动态
 
371
 
372
  - 已知问题
373
  - 某些浏览器翻译插件干扰此软件前端的运行
374
+ - 官方Gradio目前有很多兼容性问题,请**务必使用`requirements.txt`安装Gradio**
375
 
376
  ### III:主题
377
  可以通过修改`THEME`选项(config.py)变更主题
 
382
 
383
  1. `master` 分支: 主分支,稳定版
384
  2. `frontier` 分支: 开发分支,测试版
385
+ 3. 如何[接入其他大模型](request_llms/README.md)
386
+ 4. 访问GPT-Academic的[在线服务并支持我们](https://github.com/binary-husky/gpt_academic/wiki/online)
387
 
388
  ### V:参考与学习
389
 
app.py CHANGED
@@ -1,6 +1,17 @@
1
  import os; os.environ['no_proxy'] = '*' # 避免代理网络产生意外污染
2
- import pickle
3
- import base64
 
4
 
5
  def main():
6
  import subprocess, sys
@@ -10,7 +21,7 @@ def main():
10
  raise ModuleNotFoundError("使用项目内置Gradio获取最优体验! 请运行 `pip install -r requirements.txt` 指令安装内置Gradio及其他依赖, 详情信息见requirements.txt.")
11
  from request_llms.bridge_all import predict
12
  from toolbox import format_io, find_free_port, on_file_uploaded, on_report_generated, get_conf, ArgsGeneralWrapper, load_chat_cookies, DummyWith
13
- # 建议您复制一个config_private.py放自己的秘密, 如API和代理网址, 避免不小心传github被别人看到
14
  proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION = get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION')
15
  CHATBOT_HEIGHT, LAYOUT, AVAIL_LLM_MODELS, AUTO_CLEAR_TXT = get_conf('CHATBOT_HEIGHT', 'LAYOUT', 'AVAIL_LLM_MODELS', 'AUTO_CLEAR_TXT')
16
  ENABLE_AUDIO, AUTO_CLEAR_TXT, PATH_LOGGING, AVAIL_THEMES, THEME = get_conf('ENABLE_AUDIO', 'AUTO_CLEAR_TXT', 'PATH_LOGGING', 'AVAIL_THEMES', 'THEME')
@@ -20,21 +31,11 @@ def main():
20
  # 如果WEB_PORT是-1, 则随机选取WEB端口
21
  PORT = find_free_port() if WEB_PORT <= 0 else WEB_PORT
22
  from check_proxy import get_current_version
23
- from themes.theme import adjust_theme, advanced_css, theme_declaration, load_dynamic_theme
24
-
 
25
  title_html = f"<h1 align=\"center\">GPT 学术优化 {get_current_version()}</h1>{theme_declaration}"
26
- description = "Github源代码开源和更新[地址🚀](https://github.com/binary-husky/gpt_academic), "
27
- description += "感谢热情的[开发者们❤️](https://github.com/binary-husky/gpt_academic/graphs/contributors)."
28
- description += "</br></br>常见问题请查阅[项目Wiki](https://github.com/binary-husky/gpt_academic/wiki), "
29
- description += "如遇到Bug请前往[Bug反馈](https://github.com/binary-husky/gpt_academic/issues)."
30
- description += "</br></br>普通对话使用说明: 1. 输入问题; 2. 点击提交"
31
- description += "</br></br>基础功能区使用说明: 1. 输入文本; 2. 点击任意基础功能区按钮"
32
- description += "</br></br>函数插件区使用说明: 1. 输入路径/问题, 或者上传文件; 2. 点击任意函数插件区按钮"
33
- description += "</br></br>虚空终端使用说明: 点击虚空终端, 然后根据提示输入指令, 再次点击虚空终端"
34
- description += "</br></br>如何保存对话: 点击保存当前的对话按钮"
35
- description += "</br></br>如何语音对话: 请阅读Wiki"
36
- description += "</br></br>如何临时更换API_KEY: 在输入区输入临时API_KEY后提交(网页刷新后失效)"
37
-
38
  # 问询记录, python 版本建议3.9+(越新越好)
39
  import logging, uuid
40
  os.makedirs(PATH_LOGGING, exist_ok=True)
@@ -88,7 +89,7 @@ def main():
88
  with gr_L2(scale=1, elem_id="gpt-panel"):
89
  with gr.Accordion("输入区", open=True, elem_id="input-panel") as area_input_primary:
90
  with gr.Row():
91
- txt = gr.Textbox(show_label=False, lines=2, placeholder="输入问题或API密钥,输入多个密钥时,用英文逗号间隔。支持OpenAI密钥和API2D密钥共存。").style(container=False)
92
  with gr.Row():
93
  submitBtn = gr.Button("提交", elem_id="elem_submit", variant="primary")
94
  with gr.Row():
@@ -149,7 +150,7 @@ def main():
149
  with gr.Row():
150
  with gr.Tab("上传文件", elem_id="interact-panel"):
151
  gr.Markdown("请上传本地文件/压缩包供“函数插件区”功能调用。请注意: 上传文件后会自动把输入区修改为相应路径。")
152
- file_upload_2 = gr.Files(label="任何文件, 推荐上传压缩文件(zip, tar)", file_count="multiple")
153
 
154
  with gr.Tab("更换模型 & Prompt", elem_id="interact-panel"):
155
  md_dropdown = gr.Dropdown(AVAIL_LLM_MODELS, value=LLM_MODEL, label="更换LLM模型/请求源").style(container=False)
@@ -165,39 +166,24 @@ def main():
165
  checkboxes_2 = gr.CheckboxGroup(["自定义菜单"],
166
  value=[], label="显示/隐藏自定义菜单", elem_id='cbs').style(container=False)
167
  dark_mode_btn = gr.Button("切换界面明暗 ☀", variant="secondary").style(size="sm")
168
- dark_mode_btn.click(None, None, None, _js="""() => {
169
- if (document.querySelectorAll('.dark').length) {
170
- document.querySelectorAll('.dark').forEach(el => el.classList.remove('dark'));
171
- } else {
172
- document.querySelector('body').classList.add('dark');
173
- }
174
- }""",
175
  )
176
  with gr.Tab("帮助", elem_id="interact-panel"):
177
- gr.Markdown(description)
178
 
179
  with gr.Floating(init_x="20%", init_y="50%", visible=False, width="40%", drag="top") as area_input_secondary:
180
  with gr.Accordion("浮动输入区", open=True, elem_id="input-panel2"):
181
  with gr.Row() as row:
182
  row.style(equal_height=True)
183
  with gr.Column(scale=10):
184
- txt2 = gr.Textbox(show_label=False, placeholder="Input question here.", lines=8, label="输入区2").style(container=False)
 
185
  with gr.Column(scale=1, min_width=40):
186
  submitBtn2 = gr.Button("提交", variant="primary"); submitBtn2.style(size="sm")
187
  resetBtn2 = gr.Button("重置", variant="secondary"); resetBtn2.style(size="sm")
188
  stopBtn2 = gr.Button("停止", variant="secondary"); stopBtn2.style(size="sm")
189
  clearBtn2 = gr.Button("清除", variant="secondary", visible=False); clearBtn2.style(size="sm")
190
 
191
- def to_cookie_str(d):
192
- # Pickle the dictionary and encode it as a string
193
- pickled_dict = pickle.dumps(d)
194
- cookie_value = base64.b64encode(pickled_dict).decode('utf-8')
195
- return cookie_value
196
-
197
- def from_cookie_str(c):
198
- # Decode the base64-encoded string and unpickle it into a dictionary
199
- pickled_dict = base64.b64decode(c.encode('utf-8'))
200
- return pickle.loads(pickled_dict)
201
 
202
  with gr.Floating(init_x="20%", init_y="50%", visible=False, width="40%", drag="top") as area_customize:
203
  with gr.Accordion("自定义菜单", open=True, elem_id="edit-panel"):
@@ -229,11 +215,11 @@ def main():
229
  else:
230
  ret.update({predefined_btns[basic_btn_dropdown_]: gr.update(visible=True, value=basic_fn_title)})
231
  ret.update({cookies: cookies_})
232
- try: persistent_cookie_ = from_cookie_str(persistent_cookie_) # persistent cookie to dict
233
  except: persistent_cookie_ = {}
234
- persistent_cookie_["custom_bnt"] = customize_fn_overwrite_ # dict update new value
235
- persistent_cookie_ = to_cookie_str(persistent_cookie_) # persistent cookie to dict
236
- ret.update({persistent_cookie: persistent_cookie_}) # write persistent cookie
237
  return ret
238
 
239
  def reflesh_btn(persistent_cookie_, cookies_):
@@ -254,10 +240,11 @@ def main():
254
  else: ret.update({predefined_btns[k]: gr.update(visible=True, value=v['Title'])})
255
  return ret
256
 
257
- basic_fn_load.click(reflesh_btn, [persistent_cookie, cookies],[cookies, *customize_btns.values(), *predefined_btns.values()])
258
  h = basic_fn_confirm.click(assign_btn, [persistent_cookie, cookies, basic_btn_dropdown, basic_fn_title, basic_fn_prefix, basic_fn_suffix],
259
  [persistent_cookie, cookies, *customize_btns.values(), *predefined_btns.values()])
260
- h.then(None, [persistent_cookie], None, _js="""(persistent_cookie)=>{setCookie("persistent_cookie", persistent_cookie, 5);}""") # save persistent cookie
 
261
 
262
  # 功能区显示开关与功能区的互动
263
  def fn_area_visibility(a):
@@ -307,8 +294,8 @@ def main():
307
  click_handle = btn.click(fn=ArgsGeneralWrapper(predict), inputs=[*input_combo, gr.State(True), gr.State(btn.value)], outputs=output_combo)
308
  cancel_handles.append(click_handle)
309
  # 文件上传区,接收文件后与chatbot的互动
310
- file_upload.upload(on_file_uploaded, [file_upload, chatbot, txt, txt2, checkboxes, cookies], [chatbot, txt, txt2, cookies])
311
- file_upload_2.upload(on_file_uploaded, [file_upload_2, chatbot, txt, txt2, checkboxes, cookies], [chatbot, txt, txt2, cookies])
312
  # 函数插件-固定按钮区
313
  for k in plugins:
314
  if not plugins[k].get("AsButton", True): continue
@@ -344,18 +331,7 @@ def main():
344
  None,
345
  [secret_css],
346
  None,
347
- _js="""(css) => {
348
- var existingStyles = document.querySelectorAll("style[data-loaded-css]");
349
- for (var i = 0; i < existingStyles.length; i++) {
350
- var style = existingStyles[i];
351
- style.parentNode.removeChild(style);
352
- }
353
- var styleElement = document.createElement('style');
354
- styleElement.setAttribute('data-loaded-css', css);
355
- styleElement.innerHTML = css;
356
- document.head.appendChild(styleElement);
357
- }
358
- """
359
  )
360
  # 随变按钮的回调函数注册
361
  def route(request: gr.Request, k, *args, **kwargs):
@@ -387,27 +363,10 @@ def main():
387
  rad.feed(cookies['uuid'].hex, audio)
388
  audio_mic.stream(deal_audio, inputs=[audio_mic, cookies])
389
 
390
- def init_cookie(cookies, chatbot):
391
- # 为每一位访问的用户赋予一个独一无二的uuid编码
392
- cookies.update({'uuid': uuid.uuid4()})
393
- return cookies
394
  demo.load(init_cookie, inputs=[cookies, chatbot], outputs=[cookies])
395
- darkmode_js = """(dark) => {
396
- dark = dark == "True";
397
- if (document.querySelectorAll('.dark').length) {
398
- if (!dark){
399
- document.querySelectorAll('.dark').forEach(el => el.classList.remove('dark'));
400
- }
401
- } else {
402
- if (dark){
403
- document.querySelector('body').classList.add('dark');
404
- }
405
- }
406
- }"""
407
- load_cookie_js = """(persistent_cookie) => {
408
- return getCookie("persistent_cookie");
409
- }"""
410
- demo.load(None, inputs=None, outputs=[persistent_cookie], _js=load_cookie_js)
411
  demo.load(None, inputs=[dark_mode], outputs=None, _js=darkmode_js) # 配置暗色主题或亮色主题
412
  demo.load(None, inputs=[gr.Textbox(LAYOUT, visible=False)], outputs=None, _js='(LAYOUT)=>{GptAcademicJavaScriptInit(LAYOUT);}')
413
 
@@ -418,8 +377,18 @@ def main():
418
  if DARK_MODE: print(f"\t「暗色主题已启用(支持动态切换主题)」: http://localhost:{PORT}")
419
  else: print(f"\t「亮色主题已启用(支持动态切换主题)」: http://localhost:{PORT}")
420
 
 
 
 
 
 
 
 
 
 
421
  demo.queue(concurrency_count=CONCURRENT_COUNT).launch(server_name="0.0.0.0", share=False, favicon_path="docs/logo.png", blocked_paths=["config.py","config_private.py","docker-compose.yml","Dockerfile"])
422
 
 
423
  # 如果需要在二级路径下运行
424
  # CUSTOM_PATH = get_conf('CUSTOM_PATH')
425
  # if CUSTOM_PATH != "/":
 
1
  import os; os.environ['no_proxy'] = '*' # 避免代理网络产生意外污染
2
+
3
+ help_menu_description = \
4
+ """Github源代码开源和更新[地址🚀](https://github.com/binary-husky/gpt_academic),
5
+ 感谢热情的[开发者们❤️](https://github.com/binary-husky/gpt_academic/graphs/contributors).
6
+ </br></br>常见问题请查阅[项目Wiki](https://github.com/binary-husky/gpt_academic/wiki),
7
+ 如遇到Bug请前往[Bug反馈](https://github.com/binary-husky/gpt_academic/issues).
8
+ </br></br>普通对话使用说明: 1. 输入问题; 2. 点击提交
9
+ </br></br>基础功能区使用说明: 1. 输入文本; 2. 点击任意基础功能区按钮
10
+ </br></br>函数插件区使用说明: 1. 输入路径/问题, 或者上传文件; 2. 点击任意函数插件区按钮
11
+ </br></br>虚空终端使用说明: 点击虚空终端, 然后根据提示输入指令, 再次点击虚空终端
12
+ </br></br>如何保存对话: 点击保存当前的对话按钮
13
+ </br></br>如何语音对话: 请阅读Wiki
14
+ </br></br>如何临时更换API_KEY: 在输入区输入临时API_KEY后提交(网页刷新后失效)"""
15
 
16
  def main():
17
  import subprocess, sys
 
21
  raise ModuleNotFoundError("使用项目内置Gradio获取最优体验! 请运行 `pip install -r requirements.txt` 指令安装内置Gradio及其他依赖, 详情信息见requirements.txt.")
22
  from request_llms.bridge_all import predict
23
  from toolbox import format_io, find_free_port, on_file_uploaded, on_report_generated, get_conf, ArgsGeneralWrapper, load_chat_cookies, DummyWith
24
+ # 建议您复制一个config_private.py放自己的秘密, 如API和代理网址
25
  proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION = get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION')
26
  CHATBOT_HEIGHT, LAYOUT, AVAIL_LLM_MODELS, AUTO_CLEAR_TXT = get_conf('CHATBOT_HEIGHT', 'LAYOUT', 'AVAIL_LLM_MODELS', 'AUTO_CLEAR_TXT')
27
  ENABLE_AUDIO, AUTO_CLEAR_TXT, PATH_LOGGING, AVAIL_THEMES, THEME = get_conf('ENABLE_AUDIO', 'AUTO_CLEAR_TXT', 'PATH_LOGGING', 'AVAIL_THEMES', 'THEME')
 
31
  # 如果WEB_PORT是-1, 则随机选取WEB端口
32
  PORT = find_free_port() if WEB_PORT <= 0 else WEB_PORT
33
  from check_proxy import get_current_version
34
+ from themes.theme import adjust_theme, advanced_css, theme_declaration
35
+ from themes.theme import js_code_for_css_changing, js_code_for_darkmode_init, js_code_for_toggle_darkmode, js_code_for_persistent_cookie_init
36
+ from themes.theme import load_dynamic_theme, to_cookie_str, from_cookie_str, init_cookie
37
  title_html = f"<h1 align=\"center\">GPT 学术优化 {get_current_version()}</h1>{theme_declaration}"
38
+
 
 
 
 
 
 
 
 
 
 
 
39
  # 问询记录, python 版本建议3.9+(越新越好)
40
  import logging, uuid
41
  os.makedirs(PATH_LOGGING, exist_ok=True)
 
89
  with gr_L2(scale=1, elem_id="gpt-panel"):
90
  with gr.Accordion("输入区", open=True, elem_id="input-panel") as area_input_primary:
91
  with gr.Row():
92
+ txt = gr.Textbox(show_label=False, lines=2, placeholder="输入问题或API密钥,输入多个密钥时,用英文逗号间隔。支持多个OpenAI密钥共存。").style(container=False)
93
  with gr.Row():
94
  submitBtn = gr.Button("提交", elem_id="elem_submit", variant="primary")
95
  with gr.Row():
 
150
  with gr.Row():
151
  with gr.Tab("上传文件", elem_id="interact-panel"):
152
  gr.Markdown("请上传本地文件/压缩包供“函数插件区”功能调用。请注意: 上传文件后会自动把输入区修改为相应路径。")
153
+ file_upload_2 = gr.Files(label="任何文件, 推荐上传压缩文件(zip, tar)", file_count="multiple", elem_id="elem_upload_float")
154
 
155
  with gr.Tab("更换模型 & Prompt", elem_id="interact-panel"):
156
  md_dropdown = gr.Dropdown(AVAIL_LLM_MODELS, value=LLM_MODEL, label="更换LLM模型/请求源").style(container=False)
 
166
  checkboxes_2 = gr.CheckboxGroup(["自定义菜单"],
167
  value=[], label="显示/隐藏自定义菜单", elem_id='cbs').style(container=False)
168
  dark_mode_btn = gr.Button("切换界面明暗 ☀", variant="secondary").style(size="sm")
169
+ dark_mode_btn.click(None, None, None, _js=js_code_for_toggle_darkmode,
 
170
  )
171
  with gr.Tab("帮助", elem_id="interact-panel"):
172
+ gr.Markdown(help_menu_description)
173
 
174
  with gr.Floating(init_x="20%", init_y="50%", visible=False, width="40%", drag="top") as area_input_secondary:
175
  with gr.Accordion("浮动输入区", open=True, elem_id="input-panel2"):
176
  with gr.Row() as row:
177
  row.style(equal_height=True)
178
  with gr.Column(scale=10):
179
+ txt2 = gr.Textbox(show_label=False, placeholder="Input question here.",
180
+ elem_id='user_input_float', lines=8, label="输入区2").style(container=False)
181
  with gr.Column(scale=1, min_width=40):
182
  submitBtn2 = gr.Button("提交", variant="primary"); submitBtn2.style(size="sm")
183
  resetBtn2 = gr.Button("重置", variant="secondary"); resetBtn2.style(size="sm")
184
  stopBtn2 = gr.Button("停止", variant="secondary"); stopBtn2.style(size="sm")
185
 clearBtn2 = gr.Button("清除", variant="secondary", visible=False); clearBtn2.style(size="sm")
186
 
187
 
188
  with gr.Floating(init_x="20%", init_y="50%", visible=False, width="40%", drag="top") as area_customize:
189
  with gr.Accordion("自定义菜单", open=True, elem_id="edit-panel"):
 
215
  else:
216
  ret.update({predefined_btns[basic_btn_dropdown_]: gr.update(visible=True, value=basic_fn_title)})
217
  ret.update({cookies: cookies_})
218
+ try: persistent_cookie_ = from_cookie_str(persistent_cookie_) # persistent cookie to dict
219
  except: persistent_cookie_ = {}
220
+ persistent_cookie_["custom_bnt"] = customize_fn_overwrite_ # dict update new value
221
+ persistent_cookie_ = to_cookie_str(persistent_cookie_) # persistent cookie to dict
222
+ ret.update({persistent_cookie: persistent_cookie_}) # write persistent cookie
223
  return ret
224
 
225
  def reflesh_btn(persistent_cookie_, cookies_):
 
240
  else: ret.update({predefined_btns[k]: gr.update(visible=True, value=v['Title'])})
241
  return ret
242
 
243
+ basic_fn_load.click(reflesh_btn, [persistent_cookie, cookies], [cookies, *customize_btns.values(), *predefined_btns.values()])
244
  h = basic_fn_confirm.click(assign_btn, [persistent_cookie, cookies, basic_btn_dropdown, basic_fn_title, basic_fn_prefix, basic_fn_suffix],
245
  [persistent_cookie, cookies, *customize_btns.values(), *predefined_btns.values()])
246
+ # save persistent cookie
247
+ h.then(None, [persistent_cookie], None, _js="""(persistent_cookie)=>{setCookie("persistent_cookie", persistent_cookie, 5);}""")
248
 
249
  # 功能区显示开关与功能区的互动
250
  def fn_area_visibility(a):
 
294
  click_handle = btn.click(fn=ArgsGeneralWrapper(predict), inputs=[*input_combo, gr.State(True), gr.State(btn.value)], outputs=output_combo)
295
  cancel_handles.append(click_handle)
296
  # 文件上传区,接收文件后与chatbot的互动
297
+ file_upload.upload(on_file_uploaded, [file_upload, chatbot, txt, txt2, checkboxes, cookies], [chatbot, txt, txt2, cookies]).then(None, None, None, _js=r"()=>{toast_push('上传完毕 ...'); cancel_loading_status();}")
298
+ file_upload_2.upload(on_file_uploaded, [file_upload_2, chatbot, txt, txt2, checkboxes, cookies], [chatbot, txt, txt2, cookies]).then(None, None, None, _js=r"()=>{toast_push('上传完毕 ...'); cancel_loading_status();}")
299
  # 函数插件-固定按钮区
300
  for k in plugins:
301
  if not plugins[k].get("AsButton", True): continue
 
331
  None,
332
  [secret_css],
333
  None,
334
+ _js=js_code_for_css_changing
 
335
  )
336
  # 随变按钮的回调函数注册
337
  def route(request: gr.Request, k, *args, **kwargs):
 
363
  rad.feed(cookies['uuid'].hex, audio)
364
  audio_mic.stream(deal_audio, inputs=[audio_mic, cookies])
365
 
366
+
 
 
 
367
  demo.load(init_cookie, inputs=[cookies, chatbot], outputs=[cookies])
368
+ darkmode_js = js_code_for_darkmode_init
369
+ demo.load(None, inputs=None, outputs=[persistent_cookie], _js=js_code_for_persistent_cookie_init)
 
370
  demo.load(None, inputs=[dark_mode], outputs=None, _js=darkmode_js) # 配置暗色主题或亮色主题
371
  demo.load(None, inputs=[gr.Textbox(LAYOUT, visible=False)], outputs=None, _js='(LAYOUT)=>{GptAcademicJavaScriptInit(LAYOUT);}')
372
 
 
377
  if DARK_MODE: print(f"\t「暗色主题已启用(支持动态切换主题)」: http://localhost:{PORT}")
378
  else: print(f"\t「亮色主题已启用(支持动态切换主题)」: http://localhost:{PORT}")
379
 
380
+ def auto_updates(): time.sleep(0); auto_update()
381
+ def open_browser(): time.sleep(2); webbrowser.open_new_tab(f"http://localhost:{PORT}")
382
+ def warm_up_mods(): time.sleep(6); warm_up_modules()
383
+
384
+ threading.Thread(target=auto_updates, name="self-upgrade", daemon=True).start() # 查看自动更新
385
+ threading.Thread(target=open_browser, name="open-browser", daemon=True).start() # 打开浏览器页面
386
+ threading.Thread(target=warm_up_mods, name="warm-up", daemon=True).start() # 预热tiktoken模块
387
+
388
+ run_delayed_tasks()
389
  demo.queue(concurrency_count=CONCURRENT_COUNT).launch(server_name="0.0.0.0", share=False, favicon_path="docs/logo.png", blocked_paths=["config.py","config_private.py","docker-compose.yml","Dockerfile"])
390
 
391
+
392
  # 如果需要在二级路径下运行
393
  # CUSTOM_PATH = get_conf('CUSTOM_PATH')
394
  # if CUSTOM_PATH != "/":
check_proxy.py CHANGED
@@ -159,7 +159,15 @@ def warm_up_modules():
159
  enc.encode("模块预热", disallowed_special=())
160
  enc = model_info["gpt-4"]['tokenizer']
161
  enc.encode("模块预热", disallowed_special=())
 
 
 
 
 
 
 
162
 
 
163
  if __name__ == '__main__':
164
  import os
165
  os.environ['no_proxy'] = '*' # 避免代理网络产生意外污染
 
159
  enc.encode("模块预热", disallowed_special=())
160
  enc = model_info["gpt-4"]['tokenizer']
161
  enc.encode("模块预热", disallowed_special=())
162
+
163
+ def warm_up_vectordb():
164
+ print('正在执行一些模块的预热 ...')
165
+ from toolbox import ProxyNetworkActivate
166
+ with ProxyNetworkActivate("Warmup_Modules"):
167
+ import nltk
168
+ with ProxyNetworkActivate("Warmup_Modules"): nltk.download("punkt")
169
 
170
+
171
  if __name__ == '__main__':
172
  import os
173
  os.environ['no_proxy'] = '*' # 避免代理网络产生意外污染
config.py CHANGED
@@ -19,13 +19,13 @@ API_KEY = "此处填API密钥" # 可同时填写多个API-KEY,用英文逗
19
  USE_PROXY = False
20
  if USE_PROXY:
21
  """
 
22
  填写格式是 [协议]:// [地址] :[端口],填写之前不要忘记把USE_PROXY改成True,如果直接在海外服务器部署,此处不修改
23
  <配置教程&视频教程> https://github.com/binary-husky/gpt_academic/issues/1>
24
  [协议] 常见协议无非socks5h/http; 例如 v2**y 和 ss* 的默认本地协议是socks5h; 而cl**h 的默认本地协议是http
25
- [地址] 懂的都懂,不懂就填localhost或者127.0.0.1肯定错不了(localhost意思是代理软件安装在本机上)
26
  [端口] 在代理软件的设置里找。虽然不同的代理软件界面不一样,但端口号都应该在最显眼的位置上
27
  """
28
- # 代理网络的地址,打开你的*学*网软件查看代理的协议(socks5h / http)、地址(localhost)和端口(11284)
29
  proxies = {
30
  # [协议]:// [地址] :[端口]
31
  "http": "socks5h://localhost:11284", # 再例如 "http": "http://127.0.0.1:7890",
@@ -70,7 +70,7 @@ LAYOUT = "LEFT-RIGHT" # "LEFT-RIGHT"(左右布局) # "TOP-DOWN"(上下
70
 
71
 
72
  # 暗色模式 / 亮色模式
73
- DARK_MODE = True
74
 
75
 
76
  # 发送请求到OpenAI后,等待多久判定为超时
@@ -99,14 +99,25 @@ AVAIL_LLM_MODELS = ["gpt-3.5-turbo-1106","gpt-4-1106-preview","gpt-4-vision-prev
99
  "api2d-gpt-3.5-turbo", 'api2d-gpt-3.5-turbo-16k',
100
  "gpt-4", "gpt-4-32k", "azure-gpt-4", "api2d-gpt-4",
101
  "chatglm3", "moss", "claude-2"]
102
- # P.S. 其他可用的模型还包括 ["zhipuai", "qianfan", "deepseekcoder", "llama2", "qwen", "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613", "gpt-3.5-random"
103
- # "spark", "sparkv2", "sparkv3", "chatglm_onnx", "claude-1-100k", "claude-2", "internlm", "jittorllms_pangualpha", "jittorllms_llama"]
 
104
 
105
 
106
  # 定义界面上“询问多个GPT模型”插件应该使用哪些模型,请从AVAIL_LLM_MODELS中选择,并在不同模型之间用`&`间隔,例如"gpt-3.5-turbo&chatglm3&azure-gpt-4"
107
  MULTI_QUERY_LLM_MODELS = "gpt-3.5-turbo&chatglm3"
108
 
109
 
 
 
 
 
 
 
 
 
 
 
110
  # 百度千帆(LLM_MODEL="qianfan")
111
  BAIDU_CLOUD_API_KEY = ''
112
  BAIDU_CLOUD_SECRET_KEY = ''
@@ -121,7 +132,6 @@ CHATGLM_PTUNING_CHECKPOINT = "" # 例如"/home/hmp/ChatGLM2-6B/ptuning/output/6b
121
  LOCAL_MODEL_DEVICE = "cpu" # 可选 "cuda"
122
  LOCAL_MODEL_QUANT = "FP16" # 默认 "FP16" "INT4" 启用量化INT4版本 "INT8" 启用量化INT8版本
123
 
124
-
125
  # 设置gradio的并行线程数(不需要修改)
126
  CONCURRENT_COUNT = 100
127
 
@@ -239,6 +249,10 @@ WHEN_TO_USE_PROXY = ["Download_LLM", "Download_Gradio_Theme", "Connect_Grobid",
239
  BLOCK_INVALID_APIKEY = False
240
 
241
 
 
 
 
 
242
  # 自定义按钮的最大数量限制
243
  NUM_CUSTOM_BASIC_BTN = 4
244
 
@@ -282,6 +296,9 @@ NUM_CUSTOM_BASIC_BTN = 4
282
  │ ├── ZHIPUAI_API_KEY
283
  │ └── ZHIPUAI_MODEL
284
 
 
 
 
285
  └── "newbing" Newbing接口不再稳定,不推荐使用
286
  ├── NEWBING_STYLE
287
  └── NEWBING_COOKIES
@@ -298,7 +315,7 @@ NUM_CUSTOM_BASIC_BTN = 4
298
  ├── "jittorllms_pangualpha"
299
  ├── "jittorllms_llama"
300
  ├── "deepseekcoder"
301
- ├── "qwen"
302
  ├── RWKV的支持见Wiki
303
  └── "llama2"
304
 
 
19
  USE_PROXY = False
20
  if USE_PROXY:
21
  """
22
+ 代理网络的地址,打开你的代理软件查看代理协议(socks5h / http)、地址(localhost)和端口(11284)
23
  填写格式是 [协议]:// [地址] :[端口],填写之前不要忘记把USE_PROXY改成True,如果直接在海外服务器部署,此处不修改
24
  <配置教程&视频教程> https://github.com/binary-husky/gpt_academic/issues/1>
25
  [协议] 常见协议无非socks5h/http; 例如 v2**y 和 ss* 的默认本地协议是socks5h; 而cl**h 的默认本地协议是http
26
+ [地址] localhost或者127.0.0.1(localhost意思是代理软件安装在本机上)
27
  [端口] 在代理软件的设置里找。虽然不同的代理软件界面不一样,但端口号都应该在最显眼的位置上
28
  """
 
29
  proxies = {
30
  # [协议]:// [地址] :[端口]
31
  "http": "socks5h://localhost:11284", # 再例如 "http": "http://127.0.0.1:7890",
 
70
 
71
 
72
  # 暗色模式 / 亮色模式
73
+ DARK_MODE = False
74
 
75
 
76
  # 发送请求到OpenAI后,等待多久判定为超时
 
99
  "api2d-gpt-3.5-turbo", 'api2d-gpt-3.5-turbo-16k',
100
  "gpt-4", "gpt-4-32k", "azure-gpt-4", "api2d-gpt-4",
101
  "chatglm3", "moss", "claude-2"]
102
+ # P.S. 其他可用的模型还包括 ["zhipuai", "qianfan", "deepseekcoder", "llama2", "qwen-local", "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613", "gpt-3.5-random"
103
+ # "spark", "sparkv2", "sparkv3", "chatglm_onnx", "claude-1-100k", "claude-2", "internlm", "jittorllms_pangualpha", "jittorllms_llama"
104
+ # “qwen-turbo", "qwen-plus", "qwen-max"]
105
 
106
 
107
  # 定义界面上“询问多个GPT模型”插件应该使用哪些模型,请从AVAIL_LLM_MODELS中选择,并在不同模型之间用`&`间隔,例如"gpt-3.5-turbo&chatglm3&azure-gpt-4"
108
  MULTI_QUERY_LLM_MODELS = "gpt-3.5-turbo&chatglm3"
109
 
110
 
111
+ # 选择本地模型变体(只有当AVAIL_LLM_MODELS包含了对应本地模型时,才会起作用)
112
+ # 如果你选择Qwen系列的模型,那么请在下面的QWEN_LOCAL_MODEL_SELECTION中指定具体的模型
113
+ # 也可以是具体的模型路径
114
+ QWEN_LOCAL_MODEL_SELECTION = "Qwen/Qwen-1_8B-Chat-Int8"
115
+
116
+
117
+ # 接入通义千问在线大模型 https://dashscope.console.aliyun.com/
118
+ DASHSCOPE_API_KEY = "" # 阿里灵积云API_KEY
119
+
120
+
121
  # 百度千帆(LLM_MODEL="qianfan")
122
  BAIDU_CLOUD_API_KEY = ''
123
  BAIDU_CLOUD_SECRET_KEY = ''
 
132
  LOCAL_MODEL_DEVICE = "cpu" # 可选 "cuda"
133
  LOCAL_MODEL_QUANT = "FP16" # 默认 "FP16" "INT4" 启用量化INT4版本 "INT8" 启用量化INT8版本
134
 
 
135
  # 设置gradio的并行线程数(不需要修改)
136
  CONCURRENT_COUNT = 100
137
 
 
249
  BLOCK_INVALID_APIKEY = False
250
 
251
 
252
+ # 启用插件热加载
253
+ PLUGIN_HOT_RELOAD = False
254
+
255
+
256
  # 自定义按钮的最大数量限制
257
  NUM_CUSTOM_BASIC_BTN = 4
258
 
 
296
  │ ├── ZHIPUAI_API_KEY
297
  │ └── ZHIPUAI_MODEL
298
 
299
+ ├── "qwen-turbo" 等通义千问大模型
300
+ │ └── DASHSCOPE_API_KEY
301
+
302
  └── "newbing" Newbing接口不再稳定,不推荐使用
303
  ├── NEWBING_STYLE
304
  └── NEWBING_COOKIES
 
315
  ├── "jittorllms_pangualpha"
316
  ├── "jittorllms_llama"
317
  ├── "deepseekcoder"
318
+ ├── "qwen-local"
319
  ├── RWKV的支持见Wiki
320
  └── "llama2"
321
 
crazy_functional.py CHANGED
@@ -345,7 +345,7 @@ def get_crazy_functions():
345
  "Color": "stop",
346
  "AsButton": False,
347
  "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False)
348
- "ArgsReminder": "支持任意数量的llm接口,用&符号分隔。例如chatglm&gpt-3.5-turbo&api2d-gpt-4", # 高级参数输入区的显示提示
349
  "Function": HotReload(同时问询_指定模型)
350
  },
351
  })
@@ -354,9 +354,9 @@ def get_crazy_functions():
354
  print('Load function plugin failed')
355
 
356
  try:
357
- from crazy_functions.图片生成 import 图片生成_DALLE2, 图片生成_DALLE3
358
  function_plugins.update({
359
- "图片生成_DALLE2 (先切换模型到openai或api2d)": {
360
  "Group": "对话",
361
  "Color": "stop",
362
  "AsButton": False,
@@ -367,16 +367,26 @@ def get_crazy_functions():
367
  },
368
  })
369
  function_plugins.update({
370
- "图片生成_DALLE3 (先切换模型到openai或api2d)": {
371
  "Group": "对话",
372
  "Color": "stop",
373
  "AsButton": False,
374
  "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False)
375
- "ArgsReminder": "在这里输入分辨率, 1024x1024(默认),支持 1024x1024, 1792x1024, 1024x1792。如需生成高清图像,请输入 1024x1024-HD, 1792x1024-HD, 1024x1792-HD。", # 高级参数输入区的显示提示
376
  "Info": "使用DALLE3生成图片 | 输入参数字符串,提供图像的内容",
377
  "Function": HotReload(图片生成_DALLE3)
378
  },
379
 })
380
  except:
381
  print(trimmed_format_exc())
382
  print('Load function plugin failed')
@@ -430,7 +440,7 @@ def get_crazy_functions():
430
  print('Load function plugin failed')
431
 
432
  try:
433
- from crazy_functions.Langchain知识库 import 知识库问答
434
  function_plugins.update({
435
  "构建知识库(先上传文件素材,再运行此插件)": {
436
  "Group": "对话",
@@ -438,7 +448,7 @@ def get_crazy_functions():
438
  "AsButton": False,
439
  "AdvancedArgs": True,
440
  "ArgsReminder": "此处待注入的知识库名称id, 默认为default。文件进入知识库后可长期保存。可以通过再次调用本插件的方式,向知识库追加更多文档。",
441
- "Function": HotReload(知识库问答)
442
  }
443
  })
444
  except:
@@ -446,9 +456,9 @@ def get_crazy_functions():
446
  print('Load function plugin failed')
447
 
448
  try:
449
- from crazy_functions.Langchain知识库 import 读取知识库作答
450
  function_plugins.update({
451
- "知识库问答(构建知识库后,再运行此插件)": {
452
  "Group": "对话",
453
  "Color": "stop",
454
  "AsButton": False,
@@ -489,7 +499,7 @@ def get_crazy_functions():
489
  })
490
  from crazy_functions.Latex输出PDF结果 import Latex翻译中文并重新编译PDF
491
  function_plugins.update({
492
- "Arixv论文精细翻译(输入arxivID)[需Latex]": {
493
  "Group": "学术",
494
  "Color": "stop",
495
  "AsButton": False,
@@ -580,6 +590,20 @@ def get_crazy_functions():
580
  print(trimmed_format_exc())
581
  print('Load function plugin failed')
582
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
583
  # try:
584
  # from crazy_functions.chatglm微调工具 import 微调数据集生成
585
  # function_plugins.update({
 
345
  "Color": "stop",
346
  "AsButton": False,
347
  "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False)
348
+ "ArgsReminder": "支持任意数量的llm接口,用&符号分隔。例如chatglm&gpt-3.5-turbo&gpt-4", # 高级参数输入区的显示提示
349
  "Function": HotReload(同时问询_指定模型)
350
  },
351
  })
 
354
  print('Load function plugin failed')
355
 
356
  try:
357
+ from crazy_functions.图片生成 import 图片生成_DALLE2, 图片生成_DALLE3, 图片修改_DALLE2
358
  function_plugins.update({
359
+ "图片生成_DALLE2 (先切换模型到gpt-*)": {
360
  "Group": "对话",
361
  "Color": "stop",
362
  "AsButton": False,
 
367
  },
368
  })
369
  function_plugins.update({
370
+ "图片生成_DALLE3 (先切换模型到gpt-*)": {
371
  "Group": "对话",
372
  "Color": "stop",
373
  "AsButton": False,
374
  "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False)
375
+ "ArgsReminder": "在这里输入自定义参数「分辨率-质量(可选)-风格(可选)」, 参数示例「1024x1024-hd-vivid」 || 分辨率支持 「1024x1024」(默认) /「1792x1024」/「1024x1792 || 质量支持 「-standard」(默认) /「-hd」 || 风格支持 「-vivid」(默认) /「-natural」", # 高级参数输入区的显示提示
376
  "Info": "使用DALLE3生成图片 | 输入参数字符串,提供图像的内容",
377
  "Function": HotReload(图片生成_DALLE3)
378
  },
379
  })
380
+ function_plugins.update({
381
+ "图片修改_DALLE2 (先切换模型到gpt-*)": {
382
+ "Group": "对话",
383
+ "Color": "stop",
384
+ "AsButton": False,
385
+ "AdvancedArgs": False, # 调用时,唤起高级参数输入区(默认False)
386
+ # "Info": "使用DALLE2修改图片 | 输入参数字符串,提供图像的内容",
387
+ "Function": HotReload(图片修改_DALLE2)
388
+ },
389
+ })
390
  except:
391
  print(trimmed_format_exc())
392
  print('Load function plugin failed')
 
440
  print('Load function plugin failed')
441
 
442
  try:
443
+ from crazy_functions.知识库问答 import 知识库文件注入
444
  function_plugins.update({
445
  "构建知识库(先上传文件素材,再运行此插件)": {
446
  "Group": "对话",
 
448
  "AsButton": False,
449
  "AdvancedArgs": True,
450
  "ArgsReminder": "此处待注入的知识库名称id, 默认为default。文件进入知识库后可长期保存。可以通过再次调用本插件的方式,向知识库追加更多文档。",
451
+ "Function": HotReload(知识库文件注入)
452
  }
453
  })
454
  except:
 
456
  print('Load function plugin failed')
457
 
458
  try:
459
+ from crazy_functions.知识库问答 import 读取知识库作答
460
  function_plugins.update({
461
+ "知识库文件注入(构建知识库后,再运行此插件)": {
462
  "Group": "对话",
463
  "Color": "stop",
464
  "AsButton": False,
 
499
  })
500
  from crazy_functions.Latex输出PDF结果 import Latex翻译中文并重新编译PDF
501
  function_plugins.update({
502
+ "Arxiv论文精细翻译(输入arxivID)[需Latex]": {
503
  "Group": "学术",
504
  "Color": "stop",
505
  "AsButton": False,
 
590
  print(trimmed_format_exc())
591
  print('Load function plugin failed')
592
 
593
+ try:
594
+ from crazy_functions.互动小游戏 import 随机小游戏
595
+ function_plugins.update({
596
+ "随机互动小游戏(仅供测试)": {
597
+ "Group": "智能体",
598
+ "Color": "stop",
599
+ "AsButton": False,
600
+ "Function": HotReload(随机小游戏)
601
+ }
602
+ })
603
+ except:
604
+ print(trimmed_format_exc())
605
+ print('Load function plugin failed')
606
+
607
  # try:
608
  # from crazy_functions.chatglm微调工具 import 微调数据集生成
609
  # function_plugins.update({
crazy_functions/Latex全文润色.py CHANGED
@@ -26,8 +26,8 @@ class PaperFileGroup():
26
  self.sp_file_index.append(index)
27
  self.sp_file_tag.append(self.file_paths[index])
28
  else:
29
- from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
30
- segments = breakdown_txt_to_satisfy_token_limit_for_pdf(file_content, self.get_token_num, max_token_limit)
31
  for j, segment in enumerate(segments):
32
  self.sp_file_contents.append(segment)
33
  self.sp_file_index.append(index)
 
26
  self.sp_file_index.append(index)
27
  self.sp_file_tag.append(self.file_paths[index])
28
  else:
29
+ from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
30
+ segments = breakdown_text_to_satisfy_token_limit(file_content, max_token_limit)
31
  for j, segment in enumerate(segments):
32
  self.sp_file_contents.append(segment)
33
  self.sp_file_index.append(index)
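说明:本次提交将长文本切分逻辑从 `crazy_utils` 迁移至 `crazy_functions/pdf_fns/breakdown_txt`,新接口不再需要调用方传入 token 计数函数。示意用法如下(输入文件与上限取值均为假设,非项目原文):

```python
# 示意用法:把长文本切成若干段,每段都不超过给定的 token 上限
from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit

file_content = open("paper.tex", encoding="utf-8").read()          # 假设的输入文件
segments = breakdown_text_to_satisfy_token_limit(file_content, 1024)  # 第二个参数为 token 上限(示例值)
for seg in segments:
    print(len(seg))  # 每段的字符数;token 数由函数内部的分词器度量
```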
crazy_functions/Latex全文翻译.py CHANGED
@@ -26,8 +26,8 @@ class PaperFileGroup():
26
  self.sp_file_index.append(index)
27
  self.sp_file_tag.append(self.file_paths[index])
28
  else:
29
- from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
30
- segments = breakdown_txt_to_satisfy_token_limit_for_pdf(file_content, self.get_token_num, max_token_limit)
31
  for j, segment in enumerate(segments):
32
  self.sp_file_contents.append(segment)
33
  self.sp_file_index.append(index)
 
26
  self.sp_file_index.append(index)
27
  self.sp_file_tag.append(self.file_paths[index])
28
  else:
29
+ from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
30
+ segments = breakdown_text_to_satisfy_token_limit(file_content, max_token_limit)
31
  for j, segment in enumerate(segments):
32
  self.sp_file_contents.append(segment)
33
  self.sp_file_index.append(index)
crazy_functions/Latex输出PDF结果.py CHANGED
@@ -88,6 +88,9 @@ def arxiv_download(chatbot, history, txt, allow_cache=True):
88
  target_file = pj(translation_dir, 'translate_zh.pdf')
89
  if os.path.exists(target_file):
90
  promote_file_to_downloadzone(target_file, rename_file=None, chatbot=chatbot)
 
 
 
91
  return target_file
92
  return False
93
  def is_float(s):
 
88
  target_file = pj(translation_dir, 'translate_zh.pdf')
89
  if os.path.exists(target_file):
90
  promote_file_to_downloadzone(target_file, rename_file=None, chatbot=chatbot)
91
+ target_file_compare = pj(translation_dir, 'comparison.pdf')
92
+ if os.path.exists(target_file_compare):
93
+ promote_file_to_downloadzone(target_file_compare, rename_file=None, chatbot=chatbot)
94
  return target_file
95
  return False
96
  def is_float(s):
crazy_functions/crazy_utils.py CHANGED
@@ -1,4 +1,4 @@
- from toolbox import update_ui, get_conf, trimmed_format_exc, get_max_token
+ from toolbox import update_ui, get_conf, trimmed_format_exc, get_max_token, Singleton
  import threading
  import os
  import logging
@@ -139,6 +139,8 @@ def can_multi_process(llm):
      if llm.startswith('gpt-'): return True
      if llm.startswith('api2d-'): return True
      if llm.startswith('azure-'): return True
+     if llm.startswith('spark'): return True
+     if llm.startswith('zhipuai'): return True
      return False

  def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
@@ -312,95 +314,6 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
      return gpt_response_collection


- def breakdown_txt_to_satisfy_token_limit(txt, get_token_fn, limit):
-     def cut(txt_tocut, must_break_at_empty_line):  # 递归
-         if get_token_fn(txt_tocut) <= limit:
-             return [txt_tocut]
-         else:
-             lines = txt_tocut.split('\n')
-             estimated_line_cut = limit / get_token_fn(txt_tocut) * len(lines)
-             estimated_line_cut = int(estimated_line_cut)
-             for cnt in reversed(range(estimated_line_cut)):
-                 if must_break_at_empty_line:
-                     if lines[cnt] != "":
-                         continue
-                 print(cnt)
-                 prev = "\n".join(lines[:cnt])
-                 post = "\n".join(lines[cnt:])
-                 if get_token_fn(prev) < limit:
-                     break
-             if cnt == 0:
-                 raise RuntimeError("存在一行极长的文本!")
-             # print(len(post))
-             # 列表递归接龙
-             result = [prev]
-             result.extend(cut(post, must_break_at_empty_line))
-             return result
-     try:
-         return cut(txt, must_break_at_empty_line=True)
-     except RuntimeError:
-         return cut(txt, must_break_at_empty_line=False)
-
-
- def force_breakdown(txt, limit, get_token_fn):
-     """
-     当无法用标点、空行分割时,我们用最暴力的方法切割
-     """
-     for i in reversed(range(len(txt))):
-         if get_token_fn(txt[:i]) < limit:
-             return txt[:i], txt[i:]
-     return "Tiktoken未知错误", "Tiktoken未知错误"
-
- def breakdown_txt_to_satisfy_token_limit_for_pdf(txt, get_token_fn, limit):
-     # 递归
-     def cut(txt_tocut, must_break_at_empty_line, break_anyway=False):
-         if get_token_fn(txt_tocut) <= limit:
-             return [txt_tocut]
-         else:
-             lines = txt_tocut.split('\n')
-             estimated_line_cut = limit / get_token_fn(txt_tocut) * len(lines)
-             estimated_line_cut = int(estimated_line_cut)
-             cnt = 0
-             for cnt in reversed(range(estimated_line_cut)):
-                 if must_break_at_empty_line:
-                     if lines[cnt] != "":
-                         continue
-                 prev = "\n".join(lines[:cnt])
-                 post = "\n".join(lines[cnt:])
-                 if get_token_fn(prev) < limit:
-                     break
-             if cnt == 0:
-                 if break_anyway:
-                     prev, post = force_breakdown(txt_tocut, limit, get_token_fn)
-                 else:
-                     raise RuntimeError(f"存在一行极长的文本!{txt_tocut}")
-             # print(len(post))
-             # 列表递归接龙
-             result = [prev]
-             result.extend(cut(post, must_break_at_empty_line, break_anyway=break_anyway))
-             return result
-     try:
-         # 第1次尝试,将双空行(\n\n)作为切分点
-         return cut(txt, must_break_at_empty_line=True)
-     except RuntimeError:
-         try:
-             # 第2次尝试,将单空行(\n)作为切分点
-             return cut(txt, must_break_at_empty_line=False)
-         except RuntimeError:
-             try:
-                 # 第3次尝试,将英文句号(.)作为切分点
-                 res = cut(txt.replace('.', '。\n'), must_break_at_empty_line=False)  # 这个中文的句号是故意的,作为一个标识而存在
-                 return [r.replace('。\n', '.') for r in res]
-             except RuntimeError as e:
-                 try:
-                     # 第4次尝试,将中文句号(。)作为切分点
-                     res = cut(txt.replace('。', '。。\n'), must_break_at_empty_line=False)
-                     return [r.replace('。。\n', '。') for r in res]
-                 except RuntimeError as e:
-                     # 第5次尝试,没办法了,随便切一下敷衍吧
-                     return cut(txt, must_break_at_empty_line=False, break_anyway=True)


  def read_and_clean_pdf_text(fp):
      """
@@ -631,90 +544,6 @@ def get_files_from_everything(txt, type): # type='.md'



- def Singleton(cls):
-     _instance = {}
-
-     def _singleton(*args, **kargs):
-         if cls not in _instance:
-             _instance[cls] = cls(*args, **kargs)
-         return _instance[cls]
-
-     return _singleton
-
-
- @Singleton
- class knowledge_archive_interface():
-     def __init__(self) -> None:
-         self.threadLock = threading.Lock()
-         self.current_id = ""
-         self.kai_path = None
-         self.qa_handle = None
-         self.text2vec_large_chinese = None
-
-     def get_chinese_text2vec(self):
-         if self.text2vec_large_chinese is None:
-             # < -------------------预热文本向量化模组--------------- >
-             from toolbox import ProxyNetworkActivate
-             print('Checking Text2vec ...')
-             from langchain.embeddings.huggingface import HuggingFaceEmbeddings
-             with ProxyNetworkActivate('Download_LLM'):  # 临时地激活代理网络
-                 self.text2vec_large_chinese = HuggingFaceEmbeddings(model_name="GanymedeNil/text2vec-large-chinese")
-
-         return self.text2vec_large_chinese
-
-     def feed_archive(self, file_manifest, id="default"):
-         self.threadLock.acquire()
-         # import uuid
-         self.current_id = id
-         from zh_langchain import construct_vector_store
-         self.qa_handle, self.kai_path = construct_vector_store(
-             vs_id=self.current_id,
-             files=file_manifest,
-             sentence_size=100,
-             history=[],
-             one_conent="",
-             one_content_segmentation="",
-             text2vec = self.get_chinese_text2vec(),
-         )
-         self.threadLock.release()
-
-     def get_current_archive_id(self):
-         return self.current_id
-
-     def get_loaded_file(self):
-         return self.qa_handle.get_loaded_file()
-
-     def answer_with_archive_by_id(self, txt, id):
-         self.threadLock.acquire()
-         if not self.current_id == id:
-             self.current_id = id
-             from zh_langchain import construct_vector_store
-             self.qa_handle, self.kai_path = construct_vector_store(
-                 vs_id=self.current_id,
-                 files=[],
-                 sentence_size=100,
-                 history=[],
-                 one_conent="",
-                 one_content_segmentation="",
-                 text2vec = self.get_chinese_text2vec(),
-             )
-         VECTOR_SEARCH_SCORE_THRESHOLD = 0
-         VECTOR_SEARCH_TOP_K = 4
-         CHUNK_SIZE = 512
-         resp, prompt = self.qa_handle.get_knowledge_based_conent_test(
-             query = txt,
-             vs_path = self.kai_path,
-             score_threshold=VECTOR_SEARCH_SCORE_THRESHOLD,
-             vector_search_top_k=VECTOR_SEARCH_TOP_K,
-             chunk_conent=True,
-             chunk_size=CHUNK_SIZE,
-             text2vec = self.get_chinese_text2vec(),
-         )
-         self.threadLock.release()
-         return resp, prompt
-
  @Singleton
  class nougat_interface():
      def __init__(self):
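
The two near-duplicate splitters deleted above are replaced by a single `breakdown_text_to_satisfy_token_limit` in the new module `crazy_functions/pdf_fns/breakdown_txt.py`, which the call sites in this commit import but whose source is not part of this diff. The following is therefore only a minimal sketch of what the consolidated splitter plausibly looks like, inferred from the two call signatures seen below, `(txt, limit)` and `(txt, limit, llm_model)`; the tokenizer lookup, the default model name, and the fallback order are assumptions carried over from the deleted code:

```python
# Sketch only -- the real crazy_functions/pdf_fns/breakdown_txt.py is not shown in this diff.
def breakdown_text_to_satisfy_token_limit(txt, limit, llm_model="gpt-3.5-turbo"):
    # Resolve the tokenizer from the model registry instead of threading get_token_fn through every caller
    from request_llms.bridge_all import model_info
    enc = model_info[llm_model]['tokenizer']
    def get_token_num(text): return len(enc.encode(text, disallowed_special=()))

    def cut(text, must_break_at_empty_line):
        # Recursive splitting, same strategy as the deleted breakdown_txt_to_satisfy_token_limit_for_pdf
        if get_token_num(text) <= limit: return [text]
        lines = text.split('\n')
        estimated_line_cut = int(limit / get_token_num(text) * len(lines))
        cnt = 0
        for cnt in reversed(range(estimated_line_cut)):
            if must_break_at_empty_line and lines[cnt] != "": continue
            prev, post = "\n".join(lines[:cnt]), "\n".join(lines[cnt:])
            if get_token_num(prev) < limit: break
        if cnt == 0: raise RuntimeError("a single line exceeds the token limit")
        return [prev] + cut(post, must_break_at_empty_line)

    try:
        return cut(txt, must_break_at_empty_line=True)    # prefer cutting at blank lines
    except RuntimeError:
        return cut(txt, must_break_at_empty_line=False)   # fall back to any line break
```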
crazy_functions/latex_fns/latex_actions.py CHANGED
@@ -175,7 +175,6 @@ class LatexPaperFileGroup():
          self.sp_file_contents = []
          self.sp_file_index = []
          self.sp_file_tag = []
-
          # count_token
          from request_llms.bridge_all import model_info
          enc = model_info["gpt-3.5-turbo"]['tokenizer']
@@ -192,13 +191,12 @@ class LatexPaperFileGroup():
              self.sp_file_index.append(index)
              self.sp_file_tag.append(self.file_paths[index])
          else:
-             from ..crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
-             segments = breakdown_txt_to_satisfy_token_limit_for_pdf(file_content, self.get_token_num, max_token_limit)
+             from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
+             segments = breakdown_text_to_satisfy_token_limit(file_content, max_token_limit)
              for j, segment in enumerate(segments):
                  self.sp_file_contents.append(segment)
                  self.sp_file_index.append(index)
                  self.sp_file_tag.append(self.file_paths[index] + f".part-{j}.tex")
-         print('Segmentation: done')

      def merge_result(self):
          self.file_result = ["" for _ in range(len(self.file_paths))]
@@ -404,7 +402,7 @@ def 编译Latex(chatbot, history, main_file_original, main_file_modified, work_f
          result_pdf = pj(work_folder_modified, f'merge_diff.pdf')  # get pdf path
          promote_file_to_downloadzone(result_pdf, rename_file=None, chatbot=chatbot)  # promote file to web UI
      if modified_pdf_success:
-         yield from update_ui_lastest_msg(f'转化PDF编译已经成功, 即将退出 ...', chatbot, history)  # 刷新Gradio前端界面
+         yield from update_ui_lastest_msg(f'转化PDF编译已经成功, 正在尝试生成对比PDF, 请稍候 ...', chatbot, history)  # 刷新Gradio前端界面
          result_pdf = pj(work_folder_modified, f'{main_file_modified}.pdf')  # get pdf path
          origin_pdf = pj(work_folder_original, f'{main_file_original}.pdf')  # get pdf path
          if os.path.exists(pj(work_folder, '..', 'translation')):
@@ -416,8 +414,11 @@ def 编译Latex(chatbot, history, main_file_original, main_file_modified, work_f
              from .latex_toolbox import merge_pdfs
              concat_pdf = pj(work_folder_modified, f'comparison.pdf')
              merge_pdfs(origin_pdf, result_pdf, concat_pdf)
+             if os.path.exists(pj(work_folder, '..', 'translation')):
+                 shutil.copyfile(concat_pdf, pj(work_folder, '..', 'translation', 'comparison.pdf'))
              promote_file_to_downloadzone(concat_pdf, rename_file=None, chatbot=chatbot)  # promote file to web UI
          except Exception as e:
+             print(e)
              pass
          return True  # 成功啦
      else:
crazy_functions/latex_fns/latex_toolbox.py CHANGED
@@ -493,11 +493,38 @@ def compile_latex_with_timeout(command, cwd, timeout=60):
          return False
      return True

-
-
- def merge_pdfs(pdf1_path, pdf2_path, output_path):
-     import PyPDF2
+ def run_in_subprocess_wrapper_func(func, args, kwargs, return_dict, exception_dict):
+     import sys
+     try:
+         result = func(*args, **kwargs)
+         return_dict['result'] = result
+     except Exception as e:
+         exc_info = sys.exc_info()
+         exception_dict['exception'] = exc_info
+
+ def run_in_subprocess(func):
+     import multiprocessing
+     def wrapper(*args, **kwargs):
+         return_dict = multiprocessing.Manager().dict()
+         exception_dict = multiprocessing.Manager().dict()
+         process = multiprocessing.Process(target=run_in_subprocess_wrapper_func,
+                                           args=(func, args, kwargs, return_dict, exception_dict))
+         process.start()
+         process.join()
+         process.close()
+         if 'exception' in exception_dict:
+             # ooops, the subprocess ran into an exception
+             exc_info = exception_dict['exception']
+             raise exc_info[1].with_traceback(exc_info[2])
+         if 'result' in return_dict.keys():
+             # If the subprocess ran successfully, return the result
+             return return_dict['result']
+     return wrapper
+
+ def _merge_pdfs(pdf1_path, pdf2_path, output_path):
+     import PyPDF2  # PyPDF2这个库有严重的内存泄露问题,把它放到子进程中运行,从而方便内存的释放
      Percent = 0.95
+     # raise RuntimeError('PyPDF2 has a serious memory leak problem, please use other tools to merge PDF files.')
      # Open the first PDF file
      with open(pdf1_path, 'rb') as pdf1_file:
          pdf1_reader = PyPDF2.PdfFileReader(pdf1_file)
@@ -531,3 +558,5 @@ def merge_pdfs(pdf1_path, pdf2_path, output_path):
      # Save the merged PDF file
      with open(output_path, 'wb') as output_file:
          output_writer.write(output_file)
+
+ merge_pdfs = run_in_subprocess(_merge_pdfs)  # PyPDF2这个库有严重的内存泄露问题,把它放到子进程中运行,从而方便内存的释放
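
The `run_in_subprocess` decorator added here is generic: it runs the wrapped function in a throwaway child process, passes the return value back through a `multiprocessing.Manager` dict, and re-raises any child-side exception with its original traceback. Because `_merge_pdfs` (and PyPDF2 with it) now lives and dies inside the child, any leaked memory is reclaimed by the OS on every call. A usage sketch with a hypothetical workload (`count_pages` is not part of the repository):

```python
import PyPDF2  # the same 1.x API the diff uses (PdfFileReader)

def _count_pages(pdf_path):  # hypothetical leaky workload; module-level so it pickles
    with open(pdf_path, 'rb') as f:
        return PyPDF2.PdfFileReader(f).getNumPages()

count_pages = run_in_subprocess(_count_pages)
n = count_pages('paper.pdf')  # executes in a child process; all memory freed when it exits
```

The trade-off is one process spawn per call, which is negligible next to parsing a PDF; note that the wrapped function and its arguments must be picklable.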
crazy_functions/multi_stage/multi_stage_utils.py CHANGED
@@ -1,6 +1,7 @@
  from pydantic import BaseModel, Field
  from typing import List
  from toolbox import update_ui_lastest_msg, disable_auto_promotion
+ from toolbox import CatchException, update_ui, get_conf, select_api_key, get_log_folder
  from request_llms.bridge_all import predict_no_ui_long_connection
  from crazy_functions.json_fns.pydantic_io import GptJsonIO, JsonStringError
  import time
@@ -21,11 +22,7 @@ class GptAcademicState():
      def reset(self):
          pass

-     def lock_plugin(self, chatbot):
-         chatbot._cookies['plugin_state'] = pickle.dumps(self)
-
-     def unlock_plugin(self, chatbot):
-         self.reset()
+     def dump_state(self, chatbot):
          chatbot._cookies['plugin_state'] = pickle.dumps(self)

      def set_state(self, chatbot, key, value):
@@ -40,6 +37,57 @@ class GptAcademicState():
          state.chatbot = chatbot
          return state

- class GatherMaterials():
-     def __init__(self, materials) -> None:
-         materials = ['image', 'prompt']
+
+ class GptAcademicGameBaseState():
+     """
+     1. first init: __init__ ->
+     """
+     def init_game(self, chatbot, lock_plugin):
+         self.plugin_name = None
+         self.callback_fn = None
+         self.delete_game = False
+         self.step_cnt = 0
+
+     def lock_plugin(self, chatbot):
+         if self.callback_fn is None:
+             raise ValueError("callback_fn is None")
+         chatbot._cookies['lock_plugin'] = self.callback_fn
+         self.dump_state(chatbot)
+
+     def get_plugin_name(self):
+         if self.plugin_name is None:
+             raise ValueError("plugin_name is None")
+         return self.plugin_name
+
+     def dump_state(self, chatbot):
+         chatbot._cookies[f'plugin_state/{self.get_plugin_name()}'] = pickle.dumps(self)
+
+     def set_state(self, chatbot, key, value):
+         setattr(self, key, value)
+         chatbot._cookies[f'plugin_state/{self.get_plugin_name()}'] = pickle.dumps(self)
+
+     @staticmethod
+     def sync_state(chatbot, llm_kwargs, cls, plugin_name, callback_fn, lock_plugin=True):
+         state = chatbot._cookies.get(f'plugin_state/{plugin_name}', None)
+         if state is not None:
+             state = pickle.loads(state)
+         else:
+             state = cls()
+             state.init_game(chatbot, lock_plugin)
+         state.plugin_name = plugin_name
+         state.llm_kwargs = llm_kwargs
+         state.chatbot = chatbot
+         state.callback_fn = callback_fn
+         return state
+
+     def continue_game(self, prompt, chatbot, history):
+         # 游戏主体
+         yield from self.step(prompt, chatbot, history)
+         self.step_cnt += 1
+         # 保存状态,收尾
+         self.dump_state(chatbot)
+         # 如果游戏结束,清理
+         if self.delete_game:
+             chatbot._cookies['lock_plugin'] = None
+             chatbot._cookies[f'plugin_state/{self.get_plugin_name()}'] = None
+         yield from update_ui(chatbot=chatbot, history=history)
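
`continue_game` above fixes the contract for subclasses: implement a `step(prompt, chatbot, history)` generator, and set `self.delete_game = True` when the session is over, after which both the `lock_plugin` cookie and the per-plugin state cookie are cleared. A minimal hypothetical subclass (not in the repository), assuming the `update_ui` import this diff adds at the top of the module:

```python
import random

class GuessNumberGame(GptAcademicGameBaseState):  # hypothetical example
    def init_game(self, chatbot, lock_plugin):
        super().init_game(chatbot, lock_plugin)
        self.answer = random.randint(1, 100)

    def step(self, prompt, chatbot, history):
        # re-entered on every user message while the 'lock_plugin' cookie routes input here
        try:
            guess = int(prompt)
        except ValueError:
            chatbot.append([prompt, "Please enter a number from 1 to 100."])
            yield from update_ui(chatbot=chatbot, history=history)
            return
        if guess == self.answer:
            chatbot.append([prompt, "Correct! Game over."])
            self.delete_game = True  # continue_game() will clear the cookies
        else:
            chatbot.append([prompt, "Higher." if guess < self.answer else "Lower."])
        yield from update_ui(chatbot=chatbot, history=history)
```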
crazy_functions/pdf_fns/parse_pdf.py CHANGED
@@ -74,7 +74,7 @@ def produce_report_markdown(gpt_response_collection, meta, paper_meta_info, chat

  def translate_pdf(article_dict, llm_kwargs, chatbot, fp, generated_conclusion_files, TOKEN_LIMIT_PER_FRAGMENT, DST_LANG):
      from crazy_functions.pdf_fns.report_gen_html import construct_html
-     from crazy_functions.crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
+     from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
      from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
      from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency

@@ -116,7 +116,7 @@ def translate_pdf(article_dict, llm_kwargs, chatbot, fp, generated_conclusion_fi
          # find a smooth token limit to achieve even seperation
          count = int(math.ceil(raw_token_num / TOKEN_LIMIT_PER_FRAGMENT))
          token_limit_smooth = raw_token_num // count + count
-         return breakdown_txt_to_satisfy_token_limit_for_pdf(txt, get_token_fn=get_token_num, limit=token_limit_smooth)
+         return breakdown_text_to_satisfy_token_limit(txt, limit=token_limit_smooth, llm_model=llm_kwargs['llm_model'])

      for section in article_dict.get('sections'):
          if len(section['text']) == 0: continue
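
The smoothing arithmetic above deserves a gloss: cutting at the hard `TOKEN_LIMIT_PER_FRAGMENT` would leave a small trailing fragment, so the limit is raised just enough that the text divides into `count` roughly equal parts (the `+ count` slack absorbs integer rounding). A worked example:

```python
import math
raw_token_num = 9000
TOKEN_LIMIT_PER_FRAGMENT = 2500
count = int(math.ceil(raw_token_num / TOKEN_LIMIT_PER_FRAGMENT))  # ceil(3.6) = 4
token_limit_smooth = raw_token_num // count + count               # 2250 + 4 = 2254
# yields four ~2250-token fragments instead of three 2500-token ones plus a 1500-token tail
```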
crazy_functions/图片生成.py CHANGED
@@ -2,7 +2,7 @@ from toolbox import CatchException, update_ui, get_conf, select_api_key, get_log
  from crazy_functions.multi_stage.multi_stage_utils import GptAcademicState


- def gen_image(llm_kwargs, prompt, resolution="1024x1024", model="dall-e-2", quality=None):
+ def gen_image(llm_kwargs, prompt, resolution="1024x1024", model="dall-e-2", quality=None, style=None):
      import requests, json, time, os
      from request_llms.bridge_all import model_info

@@ -25,7 +25,10 @@ def gen_image(llm_kwargs, prompt, resolution="1024x1024", model="dall-e-2", qual
          'model': model,
          'response_format': 'url'
      }
-     if quality is not None: data.update({'quality': quality})
+     if quality is not None:
+         data['quality'] = quality
+     if style is not None:
+         data['style'] = style
      response = requests.post(url, headers=headers, json=data, proxies=proxies)
      print(response.content)
      try:
@@ -54,19 +57,25 @@ def edit_image(llm_kwargs, prompt, image_path, resolution="1024x1024", model="da
      img_endpoint = chat_endpoint.replace('chat/completions','images/edits')
      # # Generate the image
      url = img_endpoint
+     n = 1
      headers = {
          'Authorization': f"Bearer {api_key}",
-         'Content-Type': 'application/json'
-     }
-     data = {
-         'image': open(image_path, 'rb'),
-         'prompt': prompt,
-         'n': 1,
-         'size': resolution,
-         'model': model,
-         'response_format': 'url'
      }
-     response = requests.post(url, headers=headers, json=data, proxies=proxies)
+     make_transparent(image_path, image_path+'.tsp.png')
+     make_square_image(image_path+'.tsp.png', image_path+'.tspsq.png')
+     resize_image(image_path+'.tspsq.png', image_path+'.ready.png', max_size=1024)
+     image_path = image_path+'.ready.png'
+     with open(image_path, 'rb') as f:
+         file_content = f.read()
+     files = {
+         'image': (os.path.basename(image_path), file_content),
+         # 'mask': ('mask.png', open('mask.png', 'rb'))
+         'prompt': (None, prompt),
+         "n": (None, str(n)),
+         'size': (None, resolution),
+     }
+
+     response = requests.post(url, headers=headers, files=files, proxies=proxies)
      print(response.content)
      try:
          image_url = json.loads(response.content.decode('utf8'))['data'][0]['url']
@@ -95,7 +104,11 @@ def 图片生成_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, sys
          web_port 当前软件运行的端口号
      """
      history = []    # 清空历史,以免输入溢出
-     chatbot.append(("您正在调用“图像生成”插件。", "[Local Message] 生成图像, 请先把模型切换至gpt-*或者api2d-*。如果中文Prompt效果不理想, 请尝试英文Prompt。正在处理中 ....."))
+     if prompt.strip() == "":
+         chatbot.append((prompt, "[Local Message] 图像生成提示为空白,请在“输入区”输入图像生成提示。"))
+         yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面 界面更新
+         return
+     chatbot.append(("您正在调用“图像生成”插件。", "[Local Message] 生成图像, 请先把模型切换至gpt-*。如果中文Prompt效果不理想, 请尝试英文Prompt。正在处理中 ....."))
      yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面 由于请求gpt需要一段时间,我们先及时地做一次界面更新
      if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
      resolution = plugin_kwargs.get("advanced_arg", '1024x1024')
@@ -112,16 +125,25 @@ def 图片生成_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, sys
  @CatchException
  def 图片生成_DALLE3(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
      history = []    # 清空历史,以免输入溢出
-     chatbot.append(("您正在调用“图像生成”插件。", "[Local Message] 生成图像, 请先把模型切换至gpt-*或者api2d-*。如果中文Prompt效果不理想, 请尝试英文Prompt。正在处理中 ....."))
+     if prompt.strip() == "":
+         chatbot.append((prompt, "[Local Message] 图像生成提示为空白,请在“输入区”输入图像生成提示。"))
+         yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面 界面更新
+         return
+     chatbot.append(("您正在调用“图像生成”插件。", "[Local Message] 生成图像, 请先把模型切换至gpt-*。如果中文Prompt效果不理想, 请尝试英文Prompt。正在处理中 ....."))
      yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面 由于请求gpt需要一段时间,我们先及时地做一次界面更新
      if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
-     resolution = plugin_kwargs.get("advanced_arg", '1024x1024').lower()
-     if resolution.endswith('-hd'):
-         resolution = resolution.replace('-hd', '')
-         quality = 'hd'
-     else:
-         quality = 'standard'
-     image_url, image_path = gen_image(llm_kwargs, prompt, resolution, model="dall-e-3", quality=quality)
+     resolution_arg = plugin_kwargs.get("advanced_arg", '1024x1024-standard-vivid').lower()
+     parts = resolution_arg.split('-')
+     resolution = parts[0]           # 解析分辨率
+     quality = 'standard'            # 质量与风格默认值
+     style = 'vivid'
+     # 遍历检查是否有额外参数
+     for part in parts[1:]:
+         if part in ['hd', 'standard']:
+             quality = part
+         elif part in ['vivid', 'natural']:
+             style = part
+     image_url, image_path = gen_image(llm_kwargs, prompt, resolution, model="dall-e-3", quality=quality, style=style)
      chatbot.append([prompt,
          f'图像中转网址: <br/>`{image_url}`<br/>'+
          f'中转网址预览: <br/><div align="center"><img src="{image_url}"></div>'
@@ -130,6 +152,7 @@ def 图片生成_DALLE3(prompt, llm_kwargs, plugin_kwargs, chatbot, history, sys
      ])
      yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面 界面更新

+
  class ImageEditState(GptAcademicState):
      # 尚未完成
      def get_image_file(self, x):
@@ -142,18 +165,27 @@ class ImageEditState(GptAcademicState):
          file = None if not confirm else file_manifest[0]
          return confirm, file

+     def lock_plugin(self, chatbot):
+         chatbot._cookies['lock_plugin'] = 'crazy_functions.图片生成->图片修改_DALLE2'
+         self.dump_state(chatbot)
+
+     def unlock_plugin(self, chatbot):
+         self.reset()
+         chatbot._cookies['lock_plugin'] = None
+         self.dump_state(chatbot)
+
      def get_resolution(self, x):
          return (x in ['256x256', '512x512', '1024x1024']), x
-
+
      def get_prompt(self, x):
          confirm = (len(x)>=5) and (not self.get_resolution(x)[0]) and (not self.get_image_file(x)[0])
          return confirm, x
-
+
      def reset(self):
          self.req = [
-             {'value':None, 'description': '请先上传图像(必须是.png格式), 然后再次点击本插件', 'verify_fn': self.get_image_file},
-             {'value':None, 'description': '请输入分辨率,可选:256x256, 512x512 或 1024x1024', 'verify_fn': self.get_resolution},
-             {'value':None, 'description': '请输入修改需求,建议您使用英文提示词', 'verify_fn': self.get_prompt},
+             {'value':None, 'description': '请先上传图像(必须是.png格式), 然后再次点击本插件', 'verify_fn': self.get_image_file},
+             {'value':None, 'description': '请输入分辨率,可选:256x256, 512x512 或 1024x1024, 然后再次点击本插件', 'verify_fn': self.get_resolution},
+             {'value':None, 'description': '请输入修改需求,建议您使用英文提示词, 然后再次点击本插件', 'verify_fn': self.get_prompt},
          ]
          self.info = ""

@@ -163,7 +195,7 @@ class ImageEditState(GptAcademicState):
              confirm, res = r['verify_fn'](prompt)
              if confirm:
                  r['value'] = res
-                 self.set_state(chatbot, 'dummy_key', 'dummy_value')
+                 self.dump_state(chatbot)
                  break
          return self

@@ -182,23 +214,63 @@ def 图片修改_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, sys
      history = []    # 清空历史
      state = ImageEditState.get_state(chatbot, ImageEditState)
      state = state.feed(prompt, chatbot)
+     state.lock_plugin(chatbot)
      if not state.already_obtained_all_materials():
-         chatbot.append(["图片修改(先上传图片,再输入修改需求,最后输入分辨率)", state.next_req()])
+         chatbot.append(["图片修改\n\n1. 上传图片(图片中需要修改的位置用橡皮擦擦除为纯白色,即RGB=255,255,255)\n2. 输入分辨率 \n3. 输入修改需求", state.next_req()])
          yield from update_ui(chatbot=chatbot, history=history)
          return

-     image_path = state.req[0]
-     resolution = state.req[1]
-     prompt = state.req[2]
+     image_path = state.req[0]['value']
+     resolution = state.req[1]['value']
+     prompt = state.req[2]['value']
      chatbot.append(["图片修改, 执行中", f"图片:`{image_path}`<br/>分辨率:`{resolution}`<br/>修改需求:`{prompt}`"])
      yield from update_ui(chatbot=chatbot, history=history)
-
      image_url, image_path = edit_image(llm_kwargs, prompt, image_path, resolution)
-     chatbot.append([state.prompt,
+     chatbot.append([prompt,
          f'图像中转网址: <br/>`{image_url}`<br/>'+
          f'中转网址预览: <br/><div align="center"><img src="{image_url}"></div>'
          f'本地文件地址: <br/>`{image_path}`<br/>'+
          f'本地文件预览: <br/><div align="center"><img src="file={image_path}"></div>'
      ])
      yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面 界面更新
+     state.unlock_plugin(chatbot)
+
+ def make_transparent(input_image_path, output_image_path):
+     from PIL import Image
+     image = Image.open(input_image_path)
+     image = image.convert("RGBA")
+     data = image.getdata()
+     new_data = []
+     for item in data:
+         if item[0] == 255 and item[1] == 255 and item[2] == 255:
+             new_data.append((255, 255, 255, 0))
+         else:
+             new_data.append(item)
+     image.putdata(new_data)
+     image.save(output_image_path, "PNG")
+
+ def resize_image(input_path, output_path, max_size=1024):
+     from PIL import Image
+     with Image.open(input_path) as img:
+         width, height = img.size
+         if width > max_size or height > max_size:
+             if width >= height:
+                 new_width = max_size
+                 new_height = int((max_size / width) * height)
+             else:
+                 new_height = max_size
+                 new_width = int((max_size / height) * width)
+
+             resized_img = img.resize(size=(new_width, new_height))
+             resized_img.save(output_path)
+         else:
+             img.save(output_path)
+
+ def make_square_image(input_path, output_path):
+     from PIL import Image
+     with Image.open(input_path) as img:
+         width, height = img.size
+         size = max(width, height)
+         new_img = Image.new("RGBA", (size, size), color="black")
+         new_img.paste(img, ((size - width) // 2, (size - height) // 2))
+         new_img.save(output_path)
@@ -29,17 +29,12 @@ def 解析docx(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot
29
  except:
30
  raise RuntimeError('请先将.doc文档转换为.docx文档。')
31
 
32
- print(file_content)
33
  # private_upload里面的文件名在解压zip后容易出现乱码(rar和7z格式正常),故可以只分析文章内容,不输入文件名
34
- from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
35
  from request_llms.bridge_all import model_info
36
  max_token = model_info[llm_kwargs['llm_model']]['max_token']
37
  TOKEN_LIMIT_PER_FRAGMENT = max_token * 3 // 4
38
- paper_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
39
- txt=file_content,
40
- get_token_fn=model_info[llm_kwargs['llm_model']]['token_cnt'],
41
- limit=TOKEN_LIMIT_PER_FRAGMENT
42
- )
43
  this_paper_history = []
44
  for i, paper_frag in enumerate(paper_fragments):
45
  i_say = f'请对下面的文章片段用中文做概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{paper_frag}```'
 
29
  except:
30
  raise RuntimeError('请先将.doc文档转换为.docx文档。')
31
 
 
32
  # private_upload里面的文件名在解压zip后容易出现乱码(rar和7z格式正常),故可以只分析文章内容,不输入文件名
33
+ from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
34
  from request_llms.bridge_all import model_info
35
  max_token = model_info[llm_kwargs['llm_model']]['max_token']
36
  TOKEN_LIMIT_PER_FRAGMENT = max_token * 3 // 4
37
+ paper_fragments = breakdown_text_to_satisfy_token_limit(txt=file_content, limit=TOKEN_LIMIT_PER_FRAGMENT, llm_model=llm_kwargs['llm_model'])
 
 
 
 
38
  this_paper_history = []
39
  for i, paper_frag in enumerate(paper_fragments):
40
  i_say = f'请对下面的文章片段用中文做概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{paper_frag}```'
crazy_functions/批量Markdown翻译.py CHANGED
@@ -28,8 +28,8 @@ class PaperFileGroup():
              self.sp_file_index.append(index)
              self.sp_file_tag.append(self.file_paths[index])
          else:
-             from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
-             segments = breakdown_txt_to_satisfy_token_limit_for_pdf(file_content, self.get_token_num, max_token_limit)
+             from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
+             segments = breakdown_text_to_satisfy_token_limit(file_content, max_token_limit)
              for j, segment in enumerate(segments):
                  self.sp_file_contents.append(segment)
                  self.sp_file_index.append(index)
crazy_functions/批量总结PDF文档.py CHANGED
@@ -20,14 +20,9 @@ def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot,

      TOKEN_LIMIT_PER_FRAGMENT = 2500

-     from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
-     from request_llms.bridge_all import model_info
-     enc = model_info["gpt-3.5-turbo"]['tokenizer']
-     def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
-     paper_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
-         txt=file_content, get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT)
-     page_one_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
-         txt=str(page_one), get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT//4)
+     from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
+     paper_fragments = breakdown_text_to_satisfy_token_limit(txt=file_content, limit=TOKEN_LIMIT_PER_FRAGMENT, llm_model=llm_kwargs['llm_model'])
+     page_one_fragments = breakdown_text_to_satisfy_token_limit(txt=str(page_one), limit=TOKEN_LIMIT_PER_FRAGMENT//4, llm_model=llm_kwargs['llm_model'])
      # 为了更好的效果,我们剥离Introduction之后的部分(如果有)
      paper_meta = page_one_fragments[0].split('introduction')[0].split('Introduction')[0].split('INTRODUCTION')[0]
crazy_functions/批量翻译PDF文档_多线程.py CHANGED
@@ -91,14 +91,9 @@ def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot,
      page_one = str(page_one).encode('utf-8', 'ignore').decode()   # avoid reading non-utf8 chars

      # 递归地切割PDF文件
-     from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
-     from request_llms.bridge_all import model_info
-     enc = model_info["gpt-3.5-turbo"]['tokenizer']
-     def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
-     paper_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
-         txt=file_content, get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT)
-     page_one_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
-         txt=page_one, get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT//4)
+     from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
+     paper_fragments = breakdown_text_to_satisfy_token_limit(txt=file_content, limit=TOKEN_LIMIT_PER_FRAGMENT, llm_model=llm_kwargs['llm_model'])
+     page_one_fragments = breakdown_text_to_satisfy_token_limit(txt=page_one, limit=TOKEN_LIMIT_PER_FRAGMENT//4, llm_model=llm_kwargs['llm_model'])

      # 为了更好的效果,我们剥离Introduction之后的部分(如果有)
      paper_meta = page_one_fragments[0].split('introduction')[0].split('Introduction')[0].split('INTRODUCTION')[0]
crazy_functions/理解PDF文档内容.py CHANGED
@@ -18,14 +18,9 @@ def 解析PDF(file_name, llm_kwargs, plugin_kwargs, chatbot, history, system_pro

      TOKEN_LIMIT_PER_FRAGMENT = 2500

-     from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
-     from request_llms.bridge_all import model_info
-     enc = model_info["gpt-3.5-turbo"]['tokenizer']
-     def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
-     paper_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
-         txt=file_content, get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT)
-     page_one_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
-         txt=str(page_one), get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT//4)
+     from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
+     paper_fragments = breakdown_text_to_satisfy_token_limit(txt=file_content, limit=TOKEN_LIMIT_PER_FRAGMENT, llm_model=llm_kwargs['llm_model'])
+     page_one_fragments = breakdown_text_to_satisfy_token_limit(txt=str(page_one), limit=TOKEN_LIMIT_PER_FRAGMENT//4, llm_model=llm_kwargs['llm_model'])
      # 为了更好的效果,我们剥离Introduction之后的部分(如果有)
      paper_meta = page_one_fragments[0].split('introduction')[0].split('Introduction')[0].split('INTRODUCTION')[0]

@@ -45,7 +40,7 @@ def 解析PDF(file_name, llm_kwargs, plugin_kwargs, chatbot, history, system_pro
      for i in range(n_fragment):
          NUM_OF_WORD = MAX_WORD_TOTAL // n_fragment
          i_say = f"Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words: {paper_fragments[i]}"
-         i_say_show_user = f"[{i+1}/{n_fragment}] Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words: {paper_fragments[i][:200]}"
+         i_say_show_user = f"[{i+1}/{n_fragment}] Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words: {paper_fragments[i][:200]} ...."
          gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(i_say, i_say_show_user,  # i_say=真正给chatgpt的提问, i_say_show_user=给用户看的提问
                                                                             llm_kwargs, chatbot,
                                                                             history=["The main idea of the previous section is?", last_iteration_result],  # 迭代上一次的结果
crazy_functions/解析JupyterNotebook.py CHANGED
@@ -12,13 +12,6 @@ class PaperFileGroup():
          self.sp_file_index = []
          self.sp_file_tag = []

-         # count_token
-         from request_llms.bridge_all import model_info
-         enc = model_info["gpt-3.5-turbo"]['tokenizer']
-         def get_token_num(txt): return len(
-             enc.encode(txt, disallowed_special=()))
-         self.get_token_num = get_token_num
-
      def run_file_split(self, max_token_limit=1900):
          """
          将长文本分离开来
@@ -29,9 +22,8 @@ class PaperFileGroup():
              self.sp_file_index.append(index)
              self.sp_file_tag.append(self.file_paths[index])
          else:
-             from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
-             segments = breakdown_txt_to_satisfy_token_limit_for_pdf(
-                 file_content, self.get_token_num, max_token_limit)
+             from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
+             segments = breakdown_text_to_satisfy_token_limit(file_content, max_token_limit)
              for j, segment in enumerate(segments):
                  self.sp_file_contents.append(segment)
                  self.sp_file_index.append(index)
docs/translate_english.json CHANGED
@@ -923,7 +923,7 @@
      "的第": "The",
      "个片段": "fragment",
      "总结文章": "Summarize the article",
-     "根据以上的对话": "According to the above dialogue",
+     "根据以上的对话": "According to the conversation above",
      "的主要内容": "The main content of",
      "所有文件都总结完成了吗": "Are all files summarized?",
      "如果是.doc文件": "If it is a .doc file",
@@ -1501,7 +1501,7 @@
      "发送请求到OpenAI后": "After sending the request to OpenAI",
      "上下布局": "Vertical Layout",
      "左右布局": "Horizontal Layout",
-     "对话窗的高度": "Height of the Dialogue Window",
+     "对话窗的高度": "Height of the Conversation Window",
      "重试的次数限制": "Retry Limit",
      "gpt4现在只对申请成功的人开放": "GPT-4 is now only open to those who have successfully applied",
      "提高限制请查询": "Please check for higher limits",
@@ -2183,9 +2183,8 @@
      "找不到合适插件执行该任务": "Cannot find a suitable plugin to perform this task",
      "接驳VoidTerminal": "Connect to VoidTerminal",
      "**很好": "**Very good",
-     "对话|编程": "Conversation|Programming",
-     "对话|编程|学术": "Conversation|Programming|Academic",
-     "4. 建议使用 GPT3.5 或更强的模型": "4. It is recommended to use GPT3.5 or a stronger model",
+     "对话|编程": "Conversation&ImageGenerating|Programming",
+     "对话|编程|学术": "Conversation&ImageGenerating|Programming|Academic", "4. 建议使用 GPT3.5 或更强的模型": "4. It is recommended to use GPT3.5 or a stronger model",
      "「请调用插件翻译PDF论文": "Please call the plugin to translate the PDF paper",
      "3. 如果您使用「调用插件xxx」、「修改配置xxx」、「请问」等关键词": "3. If you use keywords such as 'call plugin xxx', 'modify configuration xxx', 'please', etc.",
      "以下是一篇学术论文的基本信息": "The following is the basic information of an academic paper",
@@ -2630,7 +2629,7 @@
      "已经被记忆": "Already memorized",
      "默认用英文的": "Default to English",
      "错误追踪": "Error tracking",
-     "对话|编程|学术|智能体": "Dialogue|Programming|Academic|Intelligent agent",
+     "对话&编程|编程|学术|智能体": "Conversation&ImageGenerating|Programming|Academic|Intelligent agent",
      "请检查": "Please check",
      "检测到被滞留的缓存文档": "Detected cached documents being left behind",
      "还有哪些场合允许使用代理": "What other occasions allow the use of proxies",
@@ -2864,7 +2863,7 @@
      "加载API_KEY": "Loading API_KEY",
      "协助您编写代码": "Assist you in writing code",
      "我可以为您提供以下服务": "I can provide you with the following services",
-     "排队中请稍后 ...": "Please wait in line ...",
+     "排队中请稍候 ...": "Please wait in line ...",
      "建议您使用英文提示词": "It is recommended to use English prompts",
      "不能支撑AutoGen运行": "Cannot support AutoGen operation",
      "帮助您解决编程问题": "Help you solve programming problems",
@@ -2903,5 +2902,107 @@
      "高优先级": "High priority",
      "请配置ZHIPUAI_API_KEY": "Please configure ZHIPUAI_API_KEY",
      "单个azure模型": "Single Azure model",
-     "预留参数 context 未实现": "Reserved parameter 'context' not implemented"
- }
+     "预留参数 context 未实现": "Reserved parameter 'context' not implemented",
+     "在输入区输入临时API_KEY后提交": "Submit after entering temporary API_KEY in the input area",
+     "鸟": "Bird",
+     "图片中需要修改的位置用橡皮擦擦除为纯白色": "Erase the areas in the image that need to be modified with an eraser to pure white",
+     "└── PDF文档精准解析": "└── Accurate parsing of PDF documents",
+     "└── ALLOW_RESET_CONFIG 是否允许通过自然语言描述修改本页的配置": "└── ALLOW_RESET_CONFIG Whether to allow modifying the configuration of this page through natural language description",
+     "等待指令": "Waiting for instructions",
+     "不存在": "Does not exist",
+     "选择游戏": "Select game",
+     "本地大模型示意图": "Local large model diagram",
+     "无视此消息即可": "You can ignore this message",
+     "即RGB=255": "That is, RGB=255",
+     "如需追问": "If you have further questions",
+     "也可以是具体的模型路径": "It can also be a specific model path",
+     "才会起作用": "Will take effect",
+     "下载失败": "Download failed",
+     "网页刷新后失效": "Invalid after webpage refresh",
+     "crazy_functions.互动小游戏-": "crazy_functions.Interactive mini game-",
+     "右对齐": "Right alignment",
+     "您可以调用下拉菜单中的“LoadConversationHistoryArchive”还原当下的对话": "You can use the 'LoadConversationHistoryArchive' in the drop-down menu to restore the current conversation",
+     "左对齐": "Left alignment",
+     "使用默认的 FP16": "Use default FP16",
+     "一小时": "One hour",
+     "从而方便内存的释放": "Thus facilitating memory release",
+     "如何临时更换API_KEY": "How to temporarily change API_KEY",
+     "请输入 1024x1024-HD": "Please enter 1024x1024-HD",
+     "使用 INT8 量化": "Use INT8 quantization",
+     "3. 输入修改需求": "3. Enter modification requirements",
+     "刷新界面 由于请求gpt需要一段时间": "Refreshing the interface takes some time due to the request for gpt",
+     "随机小游戏": "Random mini game",
+     "那么请在下面的QWEN_MODEL_SELECTION中指定具体的模型": "So please specify the specific model in QWEN_MODEL_SELECTION below",
+     "表值": "Table value",
+     "我画你猜": "I draw, you guess",
+     "狗": "Dog",
+     "2. 输入分辨率": "2. Enter resolution",
+     "鱼": "Fish",
+     "尚未完成": "Not yet completed",
+     "表头": "Table header",
+     "填localhost或者127.0.0.1": "Fill in localhost or 127.0.0.1",
+     "请上传jpg格式的图片": "Please upload images in jpg format",
+     "API_URL_REDIRECT填写格式是错误的": "The format of API_URL_REDIRECT is incorrect",
+     "├── RWKV的支持见Wiki": "Support for RWKV is available in the Wiki",
+     "如果中文Prompt效果不理想": "If the Chinese prompt is not effective",
+     "/SEAFILE_LOCAL/50503047/我的资料库/学位/paperlatex/aaai/Fu_8368_with_appendix": "/SEAFILE_LOCAL/50503047/My Library/Degree/paperlatex/aaai/Fu_8368_with_appendix",
+     "只有当AVAIL_LLM_MODELS包含了对应本地模型时": "Only when AVAIL_LLM_MODELS contains the corresponding local model",
+     "选择本地模型变体": "Choose the local model variant",
+     "如果您确信自己没填错": "If you are sure you haven't made a mistake",
+     "PyPDF2这个库有严重的内存泄露问题": "PyPDF2 library has serious memory leak issues",
+     "整理文件集合 输出消息": "Organize file collection and output message",
+     "没有检测到任何近期上传的图像文件": "No recently uploaded image files detected",
+     "游戏结束": "Game over",
+     "调用结束": "Call ended",
+     "猫": "Cat",
+     "请及时切换模型": "Please switch models in time",
+     "次中": "In the meantime",
+     "如需生成高清图像": "If you need to generate high-definition images",
+     "CPU 模式": "CPU mode",
+     "项目目录": "Project directory",
+     "动物": "Animal",
+     "居中对齐": "Center alignment",
+     "请注意拓展名需要小写": "Please note that the extension name needs to be lowercase",
+     "重试第": "Retry",
+     "实验性功能": "Experimental feature",
+     "猜错了": "Wrong guess",
+     "打开你的代理软件查看代理协议": "Open your proxy software to view the proxy agreement",
+     "您不需要再重复强调该文件的路径了": "You don't need to emphasize the file path again",
+     "请阅读": "Please read",
+     "请直接输入您的问题": "Please enter your question directly",
+     "API_URL_REDIRECT填错了": "API_URL_REDIRECT is filled incorrectly",
+     "谜底是": "The answer is",
+     "第一个模型": "The first model",
+     "你猜对了!": "You guessed it right!",
+     "已经接收到您上传的文件": "The file you uploaded has been received",
+     "您正在调用“图像生成”插件": "You are calling the 'Image Generation' plugin",
+     "刷新界面 界面更新": "Refresh the interface, interface update",
+     "如果之前已经初始化了游戏实例": "If the game instance has been initialized before",
+     "文件": "File",
+     "老鼠": "Mouse",
+     "列2": "Column 2",
+     "等待图片": "Waiting for image",
+     "使用 INT4 量化": "Use INT4 quantization",
+     "from crazy_functions.互动小游戏 import 随机小游戏": "TranslatedText",
+     "游戏主体": "TranslatedText",
+     "该模型不具备上下文对话能力": "TranslatedText",
+     "列3": "TranslatedText",
+     "清理": "TranslatedText",
+     "检查量化配置": "TranslatedText",
+     "如果游戏结束": "TranslatedText",
+     "蛇": "TranslatedText",
+     "则继续该实例;否则重新初始化": "TranslatedText",
+     "e.g. cat and 猫 are the same thing": "TranslatedText",
+     "第三个模型": "TranslatedText",
+     "如果你选择Qwen系列的模型": "TranslatedText",
+     "列4": "TranslatedText",
+     "输入“exit”获取答案": "TranslatedText",
+     "把它放到子进程中运行": "TranslatedText",
+     "列1": "TranslatedText",
+     "使用该模型需要额外依赖": "TranslatedText",
+     "再试试": "TranslatedText",
+     "1. 上传图片": "TranslatedText",
+     "保存状态": "TranslatedText",
+     "GPT-Academic对话存档": "TranslatedText",
+     "Arxiv论文精细翻译": "TranslatedText"
+ }
docs/translate_traditionalchinese.json CHANGED
@@ -1043,9 +1043,9 @@
      "jittorllms响应异常": "jittorllms response exception",
      "在项目根目录运行这两个指令": "Run these two commands in the project root directory",
      "获取tokenizer": "Get tokenizer",
-     "chatbot 为WebUI中显示的对话列表": "chatbot is the list of dialogues displayed in WebUI",
+     "chatbot 为WebUI中显示的对话列表": "chatbot is the list of conversations displayed in WebUI",
      "test_解析一个Cpp项目": "test_parse a Cpp project",
-     "将对话记录history以Markdown格式写入文件中": "Write the dialogue record history to a file in Markdown format",
+     "将对话记录history以Markdown格式写入文件中": "Write the conversations record history to a file in Markdown format",
      "装饰器函数": "Decorator function",
      "玫瑰色": "Rose color",
      "将单空行": "刪除單行空白",
@@ -2270,4 +2270,4 @@
      "标注节点的行数范围": "標註節點的行數範圍",
      "默认 True": "默認 True",
      "将两个PDF拼接": "將兩個PDF拼接"
- }
+ }
multi_language.py CHANGED
@@ -182,12 +182,12 @@ cached_translation = read_map_from_json(language=LANG)
  def trans(word_to_translate, language, special=False):
      if len(word_to_translate) == 0: return {}
      from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
-     from toolbox import get_conf, ChatBotWithCookies
-     proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION, CHATBOT_HEIGHT, LAYOUT, API_KEY = \
-         get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION', 'CHATBOT_HEIGHT', 'LAYOUT', 'API_KEY')
+     from toolbox import get_conf, ChatBotWithCookies, load_chat_cookies
+
+     cookies = load_chat_cookies()
      llm_kwargs = {
-         'api_key': API_KEY,
-         'llm_model': LLM_MODEL,
+         'api_key': cookies['api_key'],
+         'llm_model': cookies['llm_model'],
          'top_p':1.0,
          'max_length': None,
          'temperature':0.4,
@@ -245,15 +245,15 @@ def trans(word_to_translate, language, special=False):
  def trans_json(word_to_translate, language, special=False):
      if len(word_to_translate) == 0: return {}
      from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
-     from toolbox import get_conf, ChatBotWithCookies
-     proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION, CHATBOT_HEIGHT, LAYOUT, API_KEY = \
-         get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION', 'CHATBOT_HEIGHT', 'LAYOUT', 'API_KEY')
+     from toolbox import get_conf, ChatBotWithCookies, load_chat_cookies
+
+     cookies = load_chat_cookies()
      llm_kwargs = {
-         'api_key': API_KEY,
-         'llm_model': LLM_MODEL,
+         'api_key': cookies['api_key'],
+         'llm_model': cookies['llm_model'],
          'top_p':1.0,
          'max_length': None,
-         'temperature':0.1,
+         'temperature':0.4,
      }
      import random
      N_EACH_REQ = random.randint(16, 32)
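
`load_chat_cookies` lives in `toolbox.py`, whose changes are not included in this excerpt; from these call sites it evidently returns a dict carrying at least `api_key` and `llm_model`. A hedged sketch of such a helper, assuming it does little more than lift the values from the config (the real implementation in this commit may merge additional fields):

```python
# Assumption: approximate shape only; see toolbox.py in this commit for the real implementation.
def load_chat_cookies():
    from toolbox import get_conf
    API_KEY, LLM_MODEL = get_conf('API_KEY', 'LLM_MODEL')
    return {'api_key': API_KEY, 'llm_model': LLM_MODEL}
```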
request_llms/bridge_all.py CHANGED
@@ -431,16 +431,48 @@ if "chatglm_onnx" in AVAIL_LLM_MODELS:
         })
     except:
         print(trimmed_format_exc())
-if "qwen" in AVAIL_LLM_MODELS:
+if "qwen-local" in AVAIL_LLM_MODELS:
+    try:
+        from .bridge_qwen_local import predict_no_ui_long_connection as qwen_local_noui
+        from .bridge_qwen_local import predict as qwen_local_ui
+        model_info.update({
+            "qwen-local": {
+                "fn_with_ui": qwen_local_ui,
+                "fn_without_ui": qwen_local_noui,
+                "endpoint": None,
+                "max_token": 4096,
+                "tokenizer": tokenizer_gpt35,
+                "token_cnt": get_token_num_gpt35,
+            }
+        })
+    except:
+        print(trimmed_format_exc())
+if "qwen-turbo" in AVAIL_LLM_MODELS or "qwen-plus" in AVAIL_LLM_MODELS or "qwen-max" in AVAIL_LLM_MODELS: # qwen (dashscope)
     try:
         from .bridge_qwen import predict_no_ui_long_connection as qwen_noui
         from .bridge_qwen import predict as qwen_ui
         model_info.update({
-            "qwen": {
+            "qwen-turbo": {
                 "fn_with_ui": qwen_ui,
                 "fn_without_ui": qwen_noui,
                 "endpoint": None,
-                "max_token": 4096,
+                "max_token": 6144,
+                "tokenizer": tokenizer_gpt35,
+                "token_cnt": get_token_num_gpt35,
+            },
+            "qwen-plus": {
+                "fn_with_ui": qwen_ui,
+                "fn_without_ui": qwen_noui,
+                "endpoint": None,
+                "max_token": 30720,
+                "tokenizer": tokenizer_gpt35,
+                "token_cnt": get_token_num_gpt35,
+            },
+            "qwen-max": {
+                "fn_with_ui": qwen_ui,
+                "fn_without_ui": qwen_noui,
+                "endpoint": None,
+                "max_token": 28672,
                 "tokenizer": tokenizer_gpt35,
                 "token_cnt": get_token_num_gpt35,
             }
@@ -552,7 +584,7 @@ if "deepseekcoder" in AVAIL_LLM_MODELS: # deepseekcoder
         "fn_with_ui": deepseekcoder_ui,
         "fn_without_ui": deepseekcoder_noui,
         "endpoint": None,
-        "max_token": 4096,
+        "max_token": 2048,
         "tokenizer": tokenizer_gpt35,
         "token_cnt": get_token_num_gpt35,
     }
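Each entry registered in `model_info` is a plain dict that the rest of the framework looks up by model name; `get_max_token` in toolbox.py (further down in this same commit) reads the token budget exactly this way:

from request_llms.bridge_all import model_info

def get_max_token(llm_kwargs):
    return model_info[llm_kwargs['llm_model']]['max_token']

# get_max_token({'llm_model': 'qwen-plus'})  # -> 30720 with the registration above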
request_llms/bridge_chatgpt.py CHANGED
@@ -51,7 +51,8 @@ def decode_chunk(chunk):
         chunkjson = json.loads(chunk_decoded[6:])
         has_choices = 'choices' in chunkjson
         if has_choices: choice_valid = (len(chunkjson['choices']) > 0)
-        if has_choices and choice_valid: has_content = "content" in chunkjson['choices'][0]["delta"]
+        if has_choices and choice_valid: has_content = ("content" in chunkjson['choices'][0]["delta"])
+        if has_content: has_content = (chunkjson['choices'][0]["delta"]["content"] is not None)
         if has_choices and choice_valid: has_role = "role" in chunkjson['choices'][0]["delta"]
     except:
         pass
@@ -101,20 +102,25 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
     result = ''
     json_data = None
     while True:
-        try: chunk = next(stream_response).decode()
+        try: chunk = next(stream_response)
         except StopIteration:
             break
         except requests.exceptions.ConnectionError:
-            chunk = next(stream_response).decode() # failed; retry once. If it fails again, there is nothing more to be done.
-        if len(chunk)==0: continue
-        if not chunk.startswith('data:'):
-            error_msg = get_full_error(chunk.encode('utf8'), stream_response).decode()
+            chunk = next(stream_response) # failed; retry once. If it fails again, there is nothing more to be done.
+        chunk_decoded, chunkjson, has_choices, choice_valid, has_content, has_role = decode_chunk(chunk)
+        if len(chunk_decoded)==0: continue
+        if not chunk_decoded.startswith('data:'):
+            error_msg = get_full_error(chunk, stream_response).decode()
             if "reduce the length" in error_msg:
                 raise ConnectionAbortedError("OpenAI拒绝了请求:" + error_msg)
             else:
                 raise RuntimeError("OpenAI拒绝了请求:" + error_msg)
-        if ('data: [DONE]' in chunk): break # api2d finished normally
-        json_data = json.loads(chunk.lstrip('data:'))['choices'][0]
+        if ('data: [DONE]' in chunk_decoded): break # api2d finished normally
+        # peek at a few fields early (used for error detection)
+        if has_choices and not choice_valid:
+            # some low-quality third-party endpoints produce this kind of error
+            continue
+        json_data = chunkjson['choices'][0]
         delta = json_data["delta"]
         if len(delta) == 0: break
         if "role" in delta: continue
request_llms/bridge_chatgpt_vision.py CHANGED
@@ -15,29 +15,16 @@ import requests
 import base64
 import os
 import glob
-
-from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys, clip_history, trimmed_format_exc, is_the_upload_folder, update_ui_lastest_msg, get_max_token
+from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys, clip_history, trimmed_format_exc, is_the_upload_folder, \
+    update_ui_lastest_msg, get_max_token, encode_image, have_any_recent_upload_image_files
+
+
 proxies, TIMEOUT_SECONDS, MAX_RETRY, API_ORG, AZURE_CFG_ARRAY = \
     get_conf('proxies', 'TIMEOUT_SECONDS', 'MAX_RETRY', 'API_ORG', 'AZURE_CFG_ARRAY')

 timeout_bot_msg = '[Local Message] Request timeout. Network error. Please check proxy settings in config.py.' + \
                   '网络错误,检查代理服务器是否可用,以及代理设置的格式是否正确,格式须是[协议]://[地址]:[端口],缺一不可。'

-def have_any_recent_upload_image_files(chatbot):
-    _5min = 5 * 60
-    if chatbot is None: return False, None  # chatbot is None
-    most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
-    if not most_recent_uploaded: return False, None  # most_recent_uploaded is None
-    if time.time() - most_recent_uploaded["time"] < _5min:
-        most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
-        path = most_recent_uploaded['path']
-        file_manifest = [f for f in glob.glob(f'{path}/**/*.jpg', recursive=True)]
-        file_manifest += [f for f in glob.glob(f'{path}/**/*.jpeg', recursive=True)]
-        file_manifest += [f for f in glob.glob(f'{path}/**/*.png', recursive=True)]
-        if len(file_manifest) == 0: return False, None
-        return True, file_manifest  # most_recent_uploaded is new
-    else:
-        return False, None  # most_recent_uploaded is too old

 def report_invalid_key(key):
     if get_conf("BLOCK_INVALID_APIKEY"):
@@ -258,10 +245,6 @@ def handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg,
     chatbot[-1] = (chatbot[-1][0], f"[Local Message] 异常 \n\n{tb_str} \n\n{regular_txt_to_markdown(chunk_decoded)}")
     return chatbot, history

-# Function to encode the image
-def encode_image(image_path):
-    with open(image_path, "rb") as image_file:
-        return base64.b64encode(image_file.read()).decode('utf-8')

 def generate_payload(inputs, llm_kwargs, history, system_prompt, image_paths):
     """
request_llms/bridge_deepseekcoder.py CHANGED
@@ -6,6 +6,7 @@ from toolbox import ProxyNetworkActivate
 from toolbox import get_conf
 from .local_llm_class import LocalLLMHandle, get_local_llm_predict_fns
 from threading import Thread
+import torch

 def download_huggingface_model(model_name, max_retry, local_dir):
     from huggingface_hub import snapshot_download
@@ -36,9 +37,46 @@ class GetCoderLMHandle(LocalLLMHandle):
         # tokenizer = download_huggingface_model(model_name, max_retry=128, local_dir=local_dir)
         tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
         self._streamer = TextIteratorStreamer(tokenizer)
-        model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)
+        device_map = {
+            "transformer.word_embeddings": 0,
+            "transformer.word_embeddings_layernorm": 0,
+            "lm_head": 0,
+            "transformer.h": 0,
+            "transformer.ln_f": 0,
+            "model.embed_tokens": 0,
+            "model.layers": 0,
+            "model.norm": 0,
+        }
+
+        # check the quantization configuration
+        quantization_type = get_conf('LOCAL_MODEL_QUANT')
+
         if get_conf('LOCAL_MODEL_DEVICE') != 'cpu':
-            model = model.cuda()
+            if quantization_type == "INT8":
+                from transformers import BitsAndBytesConfig
+                # load with INT8 quantization
+                model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True, load_in_8bit=True,
+                                                             device_map=device_map)
+            elif quantization_type == "INT4":
+                from transformers import BitsAndBytesConfig
+                # load with INT4 quantization
+                bnb_config = BitsAndBytesConfig(
+                    load_in_4bit=True,
+                    bnb_4bit_use_double_quant=True,
+                    bnb_4bit_quant_type="nf4",
+                    bnb_4bit_compute_dtype=torch.bfloat16
+                )
+                model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True,
+                                                             quantization_config=bnb_config, device_map=device_map)
+            else:
+                # default: load in bfloat16
+                model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True,
+                                                             torch_dtype=torch.bfloat16, device_map=device_map)
+        else:
+            # CPU mode
+            model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True,
+                                                         torch_dtype=torch.bfloat16)
+
         return model, tokenizer

     def llm_stream_generator(self, **kwargs):
@@ -54,7 +92,10 @@ class GetCoderLMHandle(LocalLLMHandle):
         query, max_length, top_p, temperature, history = adaptor(kwargs)
         history.append({ 'role': 'user', 'content': query})
         messages = history
-        inputs = self._tokenizer.apply_chat_template(messages, return_tensors="pt").to(self._model.device)
+        inputs = self._tokenizer.apply_chat_template(messages, return_tensors="pt")
+        if inputs.shape[1] > max_length:
+            inputs = inputs[:, -max_length:]
+        inputs = inputs.to(self._model.device)
         generation_kwargs = dict(
             inputs=inputs,
             max_new_tokens=max_length,
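The loading path is driven entirely by two config keys read through `get_conf`. A config_private.py sketch (key names taken from the calls above; the values are illustrative):

LOCAL_MODEL_DEVICE = "cuda"   # anything other than 'cpu' selects the GPU branches above
LOCAL_MODEL_QUANT = "INT4"    # "INT8" -> load_in_8bit; "INT4" -> NF4 double quantization; anything else -> bf16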
request_llms/bridge_qwen.py CHANGED
@@ -1,67 +1,62 @@
-model_name = "Qwen"
-cmd_to_install = "`pip install -r request_llms/requirements_qwen.txt`"
-
-
-from transformers import AutoModel, AutoTokenizer
 import time
-import threading
-import importlib
-from toolbox import update_ui, get_conf, ProxyNetworkActivate
-from multiprocessing import Process, Pipe
-from .local_llm_class import LocalLLMHandle, get_local_llm_predict_fns
-
-
-
-# ------------------------------------------------------------------------------------------------------------------------
-# 🔌💻 Local Model
-# ------------------------------------------------------------------------------------------------------------------------
-class GetQwenLMHandle(LocalLLMHandle):
-
-    def load_model_info(self):
-        # 🏃‍♂️🏃‍♂️🏃‍♂️ runs in the child process
-        self.model_name = model_name
-        self.cmd_to_install = cmd_to_install
-
-    def load_model_and_tokenizer(self):
-        # 🏃‍♂️🏃‍♂️🏃‍♂️ runs in the child process
-        import os, glob
-        import os
-        import platform
-        from modelscope import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
-
-        with ProxyNetworkActivate('Download_LLM'):
-            model_id = 'qwen/Qwen-7B-Chat'
-            self._tokenizer = AutoTokenizer.from_pretrained('Qwen/Qwen-7B-Chat', trust_remote_code=True, resume_download=True)
-            # use fp16
-            model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", trust_remote_code=True, fp16=True).eval()
-            model.generation_config = GenerationConfig.from_pretrained(model_id, trust_remote_code=True)  # generation length, top_p and other hyperparameters can be overridden here
-            self._model = model
-
-        return self._model, self._tokenizer
-
-    def llm_stream_generator(self, **kwargs):
-        # 🏃‍♂️🏃‍♂️🏃‍♂️ runs in the child process
-        def adaptor(kwargs):
-            query = kwargs['query']
-            max_length = kwargs['max_length']
-            top_p = kwargs['top_p']
-            temperature = kwargs['temperature']
-            history = kwargs['history']
-            return query, max_length, top_p, temperature, history
-
-        query, max_length, top_p, temperature, history = adaptor(kwargs)
-
-        for response in self._model.chat(self._tokenizer, query, history=history, stream=True):
-            yield response
-
-    def try_to_import_special_deps(self, **kwargs):
-        # import something that will raise an error if the user has not installed requirement_*.txt
-        # 🏃‍♂️🏃‍♂️🏃‍♂️ runs in the main process
-        import importlib
-        importlib.import_module('modelscope')
-
-
-# ------------------------------------------------------------------------------------------------------------------------
-# 🔌💻 GPT-Academic Interface
-# ------------------------------------------------------------------------------------------------------------------------
-predict_no_ui_long_connection, predict = get_local_llm_predict_fns(GetQwenLMHandle, model_name)
+import os
+from toolbox import update_ui, get_conf, update_ui_lastest_msg
+from toolbox import check_packages, report_exception
+
+model_name = 'Qwen'
+
+def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
+    """
+    ⭐ Multi-threaded entry point
+    See request_llms/bridge_all.py for the function's documentation
+    """
+    watch_dog_patience = 5
+    response = ""
+
+    from .com_qwenapi import QwenRequestInstance
+    sri = QwenRequestInstance()
+    for response in sri.generate(inputs, llm_kwargs, history, sys_prompt):
+        if len(observe_window) >= 1:
+            observe_window[0] = response
+        if len(observe_window) >= 2:
+            if (time.time()-observe_window[1]) > watch_dog_patience: raise RuntimeError("程序终止。")
+    return response
+
+def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
+    """
+    ⭐ Single-threaded entry point
+    See request_llms/bridge_all.py for the function's documentation
+    """
+    chatbot.append((inputs, ""))
+    yield from update_ui(chatbot=chatbot, history=history)
+
+    # try to import the dependency; if it is missing, suggest how to install it
+    try:
+        check_packages(["dashscope"])
+    except:
+        yield from update_ui_lastest_msg(f"导入软件依赖失败。使用该模型需要额外依赖,安装方法```pip install --upgrade dashscope```。",
+                                         chatbot=chatbot, history=history, delay=0)
+        return
+
+    # check DASHSCOPE_API_KEY
+    if get_conf("DASHSCOPE_API_KEY") == "":
+        yield from update_ui_lastest_msg(f"请配置 DASHSCOPE_API_KEY。",
+                                         chatbot=chatbot, history=history, delay=0)
+        return
+
+    if additional_fn is not None:
+        from core_functional import handle_core_functionality
+        inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
+
+    # start receiving the reply
+    from .com_qwenapi import QwenRequestInstance
+    sri = QwenRequestInstance()
+    for response in sri.generate(inputs, llm_kwargs, history, system_prompt):
+        chatbot[-1] = (inputs, response)
+        yield from update_ui(chatbot=chatbot, history=history)
+
+    # wrap up the output
+    if response == f"[Local Message] 等待{model_name}响应中 ...":
+        response = f"[Local Message] {model_name}响应异常 ..."
+    history.extend([inputs, response])
+    yield from update_ui(chatbot=chatbot, history=history)
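`com_qwenapi` itself is not part of this diff; the sketch below shows the kind of dashscope streaming call it presumably wraps, following the public dashscope SDK (treat the wrapper internals as an assumption):

import dashscope
from toolbox import get_conf

dashscope.api_key = get_conf("DASHSCOPE_API_KEY")
responses = dashscope.Generation.call(
    model='qwen-turbo',
    messages=[{'role': 'user', 'content': 'hello'}],
    result_format='message',     # return choices[].message objects
    stream=True,
    incremental_output=True,     # yield only the newly generated fragment
)
for resp in responses:
    if resp.status_code == 200:
        print(resp.output.choices[0].message.content, end='')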
request_llms/bridge_spark.py CHANGED
@@ -26,7 +26,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",

     from .com_sparkapi import SparkRequestInstance
     sri = SparkRequestInstance()
-    for response in sri.generate(inputs, llm_kwargs, history, sys_prompt):
+    for response in sri.generate(inputs, llm_kwargs, history, sys_prompt, use_image_api=False):
         if len(observe_window) >= 1:
             observe_window[0] = response
         if len(observe_window) >= 2:
@@ -52,7 +52,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
     # start receiving the reply
     from .com_sparkapi import SparkRequestInstance
     sri = SparkRequestInstance()
-    for response in sri.generate(inputs, llm_kwargs, history, system_prompt):
+    for response in sri.generate(inputs, llm_kwargs, history, system_prompt, use_image_api=True):
         chatbot[-1] = (inputs, response)
         yield from update_ui(chatbot=chatbot, history=history)

request_llms/com_sparkapi.py CHANGED
@@ -1,4 +1,4 @@
-from toolbox import get_conf
+from toolbox import get_conf, get_pictures_list, encode_image
 import base64
 import datetime
 import hashlib
@@ -65,18 +65,19 @@ class SparkRequestInstance():
         self.gpt_url = "ws://spark-api.xf-yun.com/v1.1/chat"
         self.gpt_url_v2 = "ws://spark-api.xf-yun.com/v2.1/chat"
         self.gpt_url_v3 = "ws://spark-api.xf-yun.com/v3.1/chat"
+        self.gpt_url_img = "wss://spark-api.cn-huabei-1.xf-yun.com/v2.1/image"

         self.time_to_yield_event = threading.Event()
         self.time_to_exit_event = threading.Event()

         self.result_buf = ""

-    def generate(self, inputs, llm_kwargs, history, system_prompt):
+    def generate(self, inputs, llm_kwargs, history, system_prompt, use_image_api=False):
         llm_kwargs = llm_kwargs
         history = history
         system_prompt = system_prompt
         import _thread as thread
-        thread.start_new_thread(self.create_blocking_request, (inputs, llm_kwargs, history, system_prompt))
+        thread.start_new_thread(self.create_blocking_request, (inputs, llm_kwargs, history, system_prompt, use_image_api))
         while True:
             self.time_to_yield_event.wait(timeout=1)
             if self.time_to_yield_event.is_set():
@@ -85,14 +86,20 @@ class SparkRequestInstance():
             return self.result_buf


-    def create_blocking_request(self, inputs, llm_kwargs, history, system_prompt):
+    def create_blocking_request(self, inputs, llm_kwargs, history, system_prompt, use_image_api):
         if llm_kwargs['llm_model'] == 'sparkv2':
             gpt_url = self.gpt_url_v2
         elif llm_kwargs['llm_model'] == 'sparkv3':
             gpt_url = self.gpt_url_v3
         else:
             gpt_url = self.gpt_url
-
+        file_manifest = []
+        if use_image_api and llm_kwargs.get('most_recent_uploaded'):
+            if llm_kwargs['most_recent_uploaded'].get('path'):
+                file_manifest = get_pictures_list(llm_kwargs['most_recent_uploaded']['path'])
+                if len(file_manifest) > 0:
+                    print('正在使用讯飞图片理解API')
+                    gpt_url = self.gpt_url_img
         wsParam = Ws_Param(self.appid, self.api_key, self.api_secret, gpt_url)
         websocket.enableTrace(False)
         wsUrl = wsParam.create_url()
@@ -101,9 +108,8 @@ class SparkRequestInstance():
         def on_open(ws):
             import _thread as thread
             thread.start_new_thread(run, (ws,))
-
         def run(ws, *args):
-            data = json.dumps(gen_params(ws.appid, *ws.all_args))
+            data = json.dumps(gen_params(ws.appid, *ws.all_args, file_manifest))
             ws.send(data)

         # handling of incoming websocket messages
@@ -142,9 +148,18 @@ class SparkRequestInstance():
         ws.all_args = (inputs, llm_kwargs, history, system_prompt)
         ws.run_forever(sslopt={"cert_reqs": ssl.CERT_NONE})

-def generate_message_payload(inputs, llm_kwargs, history, system_prompt):
+def generate_message_payload(inputs, llm_kwargs, history, system_prompt, file_manifest):
     conversation_cnt = len(history) // 2
-    messages = [{"role": "system", "content": system_prompt}]
+    messages = []
+    if file_manifest:
+        base64_images = []
+        for image_path in file_manifest:
+            base64_images.append(encode_image(image_path))
+        for img_s in base64_images:
+            if img_s not in str(messages):
+                messages.append({"role": "user", "content": img_s, "content_type": "image"})
+    else:
+        messages = [{"role": "system", "content": system_prompt}]
     if conversation_cnt:
         for index in range(0, 2*conversation_cnt, 2):
             what_i_have_asked = {}
@@ -167,7 +182,7 @@ def generate_message_payload(inputs, llm_kwargs, history, system_prompt):
     return messages


-def gen_params(appid, inputs, llm_kwargs, history, system_prompt):
+def gen_params(appid, inputs, llm_kwargs, history, system_prompt, file_manifest):
     """
     Build the request parameters from the appid and the user's query
     """
@@ -176,6 +191,8 @@ def gen_params(appid, inputs, llm_kwargs, history, system_prompt):
         "sparkv2": "generalv2",
         "sparkv3": "generalv3",
     }
+    domains_select = domains[llm_kwargs['llm_model']]
+    if file_manifest: domains_select = 'image'
     data = {
         "header": {
             "app_id": appid,
@@ -183,7 +200,7 @@ def gen_params(appid, inputs, llm_kwargs, history, system_prompt):
         },
         "parameter": {
             "chat": {
-                "domain": domains[llm_kwargs['llm_model']],
+                "domain": domains_select,
                 "temperature": llm_kwargs["temperature"],
                 "random_threshold": 0.5,
                 "max_tokens": 4096,
@@ -192,7 +209,7 @@ def gen_params(appid, inputs, llm_kwargs, history, system_prompt):
         },
         "payload": {
             "message": {
-                "text": generate_message_payload(inputs, llm_kwargs, history, system_prompt)
+                "text": generate_message_payload(inputs, llm_kwargs, history, system_prompt, file_manifest)
             }
         }
     }
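Putting the pieces together: when `use_image_api=True` and a recent upload contains pictures, both the endpoint and the `domain` switch, and the image travels as the first message. A sketch of the resulting request body (header fields beyond `app_id` are omitted here, and all values are illustrative):

data = {
    "header": {"app_id": "<appid>"},   # remaining header fields omitted
    "parameter": {"chat": {
        "domain": "image",             # selected because file_manifest is non-empty
        "temperature": 0.5,
        "random_threshold": 0.5,
        "max_tokens": 4096,
    }},
    "payload": {"message": {"text": [
        {"role": "user", "content": "<base64-encoded image>", "content_type": "image"},
        {"role": "user", "content": "What is in this picture?"},
    ]}},
}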
request_llms/local_llm_class.py CHANGED
@@ -183,11 +183,11 @@ class LocalLLMHandle(Process):
     def stream_chat(self, **kwargs):
         # ⭐run in main process
         if self.get_state() == "`准备就绪`":
-            yield "`正在等待线程锁,排队中请稍后 ...`"
+            yield "`正在等待线程锁,排队中请稍候 ...`"

         with self.threadLock:
             if self.parent.poll():
-                yield "`排队中请稍后 ...`"
+                yield "`排队中请稍候 ...`"
                 self.clear_pending_messages()
             self.parent.send(kwargs)
             std_out = ""
request_llms/requirements_chatglm_onnx.txt CHANGED
@@ -6,5 +6,3 @@ sentencepiece
 numpy
 onnxruntime
 sentencepiece
-streamlit
-streamlit-chat
request_llms/requirements_moss.txt CHANGED
@@ -5,5 +5,4 @@ accelerate
 matplotlib
 huggingface_hub
 triton
-streamlit

request_llms/requirements_qwen.txt CHANGED
@@ -1,2 +1 @@
-modelscope
-transformers_stream_generator
+dashscope
requirements.txt CHANGED
@@ -2,6 +2,7 @@ pydantic==1.10.11
 pypdf2==2.12.1
 tiktoken>=0.3.3
 requests[socks]
+protobuf==3.18
 transformers>=4.27.1
 scipdf_parser>=0.52
 python-markdown-math
tests/test_llms.py CHANGED
@@ -16,8 +16,9 @@ if __name__ == "__main__":
     # from request_llms.bridge_jittorllms_llama import predict_no_ui_long_connection
     # from request_llms.bridge_claude import predict_no_ui_long_connection
     # from request_llms.bridge_internlm import predict_no_ui_long_connection
-    from request_llms.bridge_deepseekcoder import predict_no_ui_long_connection
-    # from request_llms.bridge_qwen import predict_no_ui_long_connection
+    # from request_llms.bridge_deepseekcoder import predict_no_ui_long_connection
+    # from request_llms.bridge_qwen_7B import predict_no_ui_long_connection
+    from request_llms.bridge_qwen_local import predict_no_ui_long_connection
     # from request_llms.bridge_spark import predict_no_ui_long_connection
     # from request_llms.bridge_zhipu import predict_no_ui_long_connection
     # from request_llms.bridge_chatglm3 import predict_no_ui_long_connection
tests/test_plugins.py CHANGED
@@ -48,11 +48,11 @@ if __name__ == "__main__":
     # for lang in ["English", "French", "Japanese", "Korean", "Russian", "Italian", "German", "Portuguese", "Arabic"]:
     #     plugin_test(plugin='crazy_functions.批量Markdown翻译->Markdown翻译指定语言', main_input="README.md", advanced_arg={"advanced_arg": lang})

-    # plugin_test(plugin='crazy_functions.Langchain知识库->知识库问答', main_input="./")
+    # plugin_test(plugin='crazy_functions.知识库文件注入->知识库文件注入', main_input="./")

-    # plugin_test(plugin='crazy_functions.Langchain知识库->读取知识库作答', main_input="What is the installation method?")
+    # plugin_test(plugin='crazy_functions.知识库文件注入->读取知识库作答', main_input="What is the installation method?")

-    # plugin_test(plugin='crazy_functions.Langchain知识库->读取知识库作答', main_input="远程云服务器部署?")
+    # plugin_test(plugin='crazy_functions.知识库文件注入->读取知识库作答', main_input="远程云服务器部署?")

     # plugin_test(plugin='crazy_functions.Latex输出PDF结果->Latex翻译中文并重新编译PDF', main_input="2210.03629")

tests/test_utils.py CHANGED
@@ -56,11 +56,11 @@ vt.get_plugin_handle = silence_stdout_fn(get_plugin_handle)
 vt.get_plugin_default_kwargs = silence_stdout_fn(get_plugin_default_kwargs)
 vt.get_chat_handle = silence_stdout_fn(get_chat_handle)
 vt.get_chat_default_kwargs = silence_stdout_fn(get_chat_default_kwargs)
-vt.chat_to_markdown_str = chat_to_markdown_str
+vt.chat_to_markdown_str = (chat_to_markdown_str)
 proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION, CHATBOT_HEIGHT, LAYOUT, API_KEY = \
     vt.get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION', 'CHATBOT_HEIGHT', 'LAYOUT', 'API_KEY')

-def plugin_test(main_input, plugin, advanced_arg=None):
+def plugin_test(main_input, plugin, advanced_arg=None, debug=True):
     from rich.live import Live
     from rich.markdown import Markdown

@@ -72,7 +72,10 @@ def plugin_test(main_input, plugin, advanced_arg=None):
     plugin_kwargs['main_input'] = main_input
     if advanced_arg is not None:
         plugin_kwargs['plugin_kwargs'] = advanced_arg
-    my_working_plugin = silence_stdout(plugin)(**plugin_kwargs)
+    if debug:
+        my_working_plugin = (plugin)(**plugin_kwargs)
+    else:
+        my_working_plugin = silence_stdout(plugin)(**plugin_kwargs)

     with Live(Markdown(""), auto_refresh=False, vertical_overflow="visible") as live:
         for cookies, chat, hist, msg in my_working_plugin:
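With the new switch, a harness run keeps plugin stdout visible by default; passing `debug=False` restores the old silenced behavior (the plugin path is reused from tests/test_plugins.py):

plugin_test(plugin='crazy_functions.批量Markdown翻译->Markdown翻译指定语言',
            main_input="README.md",
            advanced_arg={"advanced_arg": "English"},
            debug=False)   # silence plugin stdout, as before this change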
themes/common.js CHANGED
@@ -1,9 +1,13 @@
+// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
+// Part 1: utility functions
+// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
+
 function gradioApp() {
     // https://github.com/GaiZhenbiao/ChuanhuChatGPT/tree/main/web_assets/javascript
     const elems = document.getElementsByTagName('gradio-app');
     const elem = elems.length == 0 ? document : elems[0];
     if (elem !== document) {
-        elem.getElementById = function(id) {
+        elem.getElementById = function (id) {
            return document.getElementById(id);
        };
    }
@@ -12,31 +16,76 @@ function gradioApp() {

 function setCookie(name, value, days) {
     var expires = "";

     if (days) {
         var date = new Date();
         date.setTime(date.getTime() + (days * 24 * 60 * 60 * 1000));
         expires = "; expires=" + date.toUTCString();
     }

     document.cookie = name + "=" + value + expires + "; path=/";
 }

 function getCookie(name) {
     var decodedCookie = decodeURIComponent(document.cookie);
     var cookies = decodedCookie.split(';');

     for (var i = 0; i < cookies.length; i++) {
         var cookie = cookies[i].trim();

         if (cookie.indexOf(name + "=") === 0) {
             return cookie.substring(name.length + 1, cookie.length);
         }
     }

     return null;
 }

+let toastCount = 0;
+function toast_push(msg, duration) {
+    duration = isNaN(duration) ? 3000 : duration;
+    const existingToasts = document.querySelectorAll('.toast');
+    existingToasts.forEach(toast => {
+        toast.style.top = `${parseInt(toast.style.top, 10) - 70}px`;
+    });
+    const m = document.createElement('div');
+    m.innerHTML = msg;
+    m.classList.add('toast');
+    m.style.cssText = `font-size: var(--text-md) !important; color: rgb(255, 255, 255); background-color: rgba(0, 0, 0, 0.6); padding: 10px 15px; border-radius: 4px; position: fixed; top: ${50 + toastCount * 70}%; left: 50%; transform: translateX(-50%); width: auto; text-align: center; transition: top 0.3s;`;
+    document.body.appendChild(m);
+    setTimeout(function () {
+        m.style.opacity = '0';
+        setTimeout(function () {
+            document.body.removeChild(m);
+            toastCount--;
+        }, 500);
+    }, duration);
+    toastCount++;
+}
+
+function toast_up(msg) {
+    var m = document.getElementById('toast_up');
+    if (m) {
+        document.body.removeChild(m); // remove the loader from the body
+    }
+    m = document.createElement('div');
+    m.id = 'toast_up';
+    m.innerHTML = msg;
+    m.style.cssText = "font-size: var(--text-md) !important; color: rgb(255, 255, 255); background-color: rgba(0, 0, 100, 0.6); padding: 10px 15px; margin: 0 0 0 -60px; border-radius: 4px; position: fixed; top: 50%; left: 50%; width: auto; text-align: center;";
+    document.body.appendChild(m);
+}
+function toast_down() {
+    var m = document.getElementById('toast_up');
+    if (m) {
+        document.body.removeChild(m); // remove the loader from the body
+    }
+}
+
+
+// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
+// Part 2: copy button
+// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
+
 function addCopyButton(botElement) {
     // https://github.com/GaiZhenbiao/ChuanhuChatGPT/tree/main/web_assets/javascript
     // Copy bot button
@@ -49,7 +98,7 @@ function addCopyButton(botElement) {
     //     messageBtnColumnElement.remove();
         return;
     }

     var copyButton = document.createElement('button');
     copyButton.classList.add('copy-bot-btn');
     copyButton.setAttribute('aria-label', 'Copy');
@@ -98,47 +147,61 @@ function chatbotContentChanged(attempt = 1, force = false) {
     }
 }

+
+// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
+// Part 3: dynamic chatbot height adjustment
+// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
+
-function chatbotAutoHeight(){
+function chatbotAutoHeight() {
     // adjust the height automatically
-    function update_height(){
-        var { panel_height_target, chatbot_height, chatbot } = get_elements(true);
-        if (panel_height_target!=chatbot_height)
-        {
-            var pixelString = panel_height_target.toString() + 'px';
+    function update_height() {
+        var { height_target, chatbot_height, chatbot } = get_elements(true);
+        if (height_target != chatbot_height) {
+            var pixelString = height_target.toString() + 'px';
             chatbot.style.maxHeight = pixelString; chatbot.style.height = pixelString;
         }
     }

-    function update_height_slow(){
-        var { panel_height_target, chatbot_height, chatbot } = get_elements();
-        if (panel_height_target!=chatbot_height)
-        {
-            new_panel_height = (panel_height_target - chatbot_height)*0.5 + chatbot_height;
-            if (Math.abs(new_panel_height - panel_height_target) < 10){
-                new_panel_height = panel_height_target;
+    function update_height_slow() {
+        var { height_target, chatbot_height, chatbot } = get_elements();
+        if (height_target != chatbot_height) {
+            new_panel_height = (height_target - chatbot_height) * 0.5 + chatbot_height;
+            if (Math.abs(new_panel_height - height_target) < 10) {
+                new_panel_height = height_target;
             }
-            // console.log(chatbot_height, panel_height_target, new_panel_height);
+            // console.log(chatbot_height, height_target, new_panel_height);
             var pixelString = new_panel_height.toString() + 'px';
             chatbot.style.maxHeight = pixelString; chatbot.style.height = pixelString;
         }
     }
+    monitoring_input_box()
     update_height();
-    setInterval(function() {
+    setInterval(function () {
         update_height_slow()
-    }, 50); // run every 100 ms
+    }, 50); // run every 50 ms
 }

-function GptAcademicJavaScriptInit(LAYOUT = "LEFT-RIGHT") {
-    chatbotIndicator = gradioApp().querySelector('#gpt-chatbot > div.wrap');
-    var chatbotObserver = new MutationObserver(() => {
-        chatbotContentChanged(1);
-    });
-    chatbotObserver.observe(chatbotIndicator, { attributes: true, childList: true, subtree: true });
-    if (LAYOUT === "LEFT-RIGHT") {chatbotAutoHeight();}
-}
+swapped = false;
+function swap_input_area() {
+    // Get the elements to be swapped
+    var element1 = document.querySelector("#input-panel");
+    var element2 = document.querySelector("#basic-panel");
+
+    // Get the parent of the elements
+    var parent = element1.parentNode;
+
+    // Get the next sibling of element2
+    var nextSibling = element2.nextSibling;
+
+    // Swap the elements
+    parent.insertBefore(element2, element1);
+    parent.insertBefore(element1, nextSibling);
+    if (swapped) {swapped = false;}
+    else {swapped = true;}
+}

-function get_elements(consider_state_panel=false) {
+function get_elements(consider_state_panel = false) {
     var chatbot = document.querySelector('#gpt-chatbot > div.wrap.svelte-18telvq');
     if (!chatbot) {
         chatbot = document.querySelector('#gpt-chatbot');
@@ -147,17 +210,292 @@ function get_elements(consider_state_panel=false) {
     const panel2 = document.querySelector('#basic-panel').getBoundingClientRect()
     const panel3 = document.querySelector('#plugin-panel').getBoundingClientRect();
     // const panel4 = document.querySelector('#interact-panel').getBoundingClientRect();
-    const panel5 = document.querySelector('#input-panel2').getBoundingClientRect();
     const panel_active = document.querySelector('#state-panel').getBoundingClientRect();
-    if (consider_state_panel || panel_active.height < 25){
+    if (consider_state_panel || panel_active.height < 25) {
         document.state_panel_height = panel_active.height;
     }
     // 25 is the chatbot label height, 16 is the gap on the right
-    var panel_height_target = panel1.height + panel2.height + panel3.height + 0 + 0 - 25 + 16*2;
+    var height_target = panel1.height + panel2.height + panel3.height + 0 + 0 - 25 + 16 * 2;
     // keep the dynamic state-panel height from influencing the result
-    panel_height_target = panel_height_target + (document.state_panel_height-panel_active.height)
-    var panel_height_target = parseInt(panel_height_target);
+    height_target = height_target + (document.state_panel_height - panel_active.height)
+    var height_target = parseInt(height_target);
     var chatbot_height = chatbot.style.height;
+    // swap the input area so that it always stays reachable
+    if (!swapped){
+        if (panel1.top!=0 && panel1.top < 0){ swap_input_area(); }
+    }
+    else if (swapped){
+        if (panel2.top!=0 && panel2.top > 0){ swap_input_area(); }
+    }
+    // adjust the height
+    const err_tor = 5;
+    if (Math.abs(panel1.left - chatbot.getBoundingClientRect().left) < err_tor){
+        // are we in narrow-screen mode?
+        height_target = window.innerHeight * 0.6;
+    }else{
+        // adjust the height
+        const chatbot_height_exceed = 15;
+        const chatbot_height_exceed_m = 10;
+        b_panel = Math.max(panel1.bottom, panel2.bottom, panel3.bottom)
+        if (b_panel >= window.innerHeight - chatbot_height_exceed) {
+            height_target = window.innerHeight - chatbot.getBoundingClientRect().top - chatbot_height_exceed_m;
+        }
+        else if (b_panel < window.innerHeight * 0.75) {
+            height_target = window.innerHeight * 0.8;
+        }
+    }
     var chatbot_height = parseInt(chatbot_height);
-    return { panel_height_target, chatbot_height, chatbot };
+    return { height_target, chatbot_height, chatbot };
+}
+
+
+// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
+// Part 4: paste and drag-and-drop file upload
+// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
+
+var elem_upload = null;
+var elem_upload_float = null;
+var elem_input_main = null;
+var elem_input_float = null;
+var elem_chatbot = null;
+var exist_file_msg = '⚠️请先删除上传区(左上方)中的历史文件,再尝试上传。'
+
+function add_func_paste(input) {
+    let paste_files = [];
+    if (input) {
+        input.addEventListener("paste", async function (e) {
+            const clipboardData = e.clipboardData || window.clipboardData;
+            const items = clipboardData.items;
+            if (items) {
+                for (i = 0; i < items.length; i++) {
+                    if (items[i].kind === "file") { // make sure it is a file
+                        const file = items[i].getAsFile();
+                        // append every pasted file to the files array
+                        paste_files.push(file);
+                        e.preventDefault();  // avoid pasting the file name into the input box
+                    }
+                }
+                if (paste_files.length > 0) {
+                    // batch-upload the collected file list
+                    await upload_files(paste_files);
+                    paste_files = []
+
+                }
+            }
+        });
+    }
+}
+
+function add_func_drag(elem) {
+    if (elem) {
+        const dragEvents = ["dragover"];
+        const leaveEvents = ["dragleave", "dragend", "drop"];
+
+        const onDrag = function (e) {
+            e.preventDefault();
+            e.stopPropagation();
+            if (elem_upload_float.querySelector("input[type=file]")) {
+                toast_up('⚠️释放以上传文件')
+            } else {
+                toast_up(exist_file_msg)
+            }
+        };
+
+        const onLeave = function (e) {
+            toast_down();
+            e.preventDefault();
+            e.stopPropagation();
+        };
+
+        dragEvents.forEach(event => {
+            elem.addEventListener(event, onDrag);
+        });
+
+        leaveEvents.forEach(event => {
+            elem.addEventListener(event, onLeave);
+        });
+
+        elem.addEventListener("drop", async function (e) {
+            const files = e.dataTransfer.files;
+            await upload_files(files);
+        });
+    }
+}
+
+async function upload_files(files) {
+    const uploadInputElement = elem_upload_float.querySelector("input[type=file]");
+    let totalSizeMb = 0
+    if (files && files.length > 0) {
+        // perform the actual upload
+        if (uploadInputElement) {
+            for (let i = 0; i < files.length; i++) {
+                // convert the file size (in bytes) to MB
+                totalSizeMb += files[i].size / 1024 / 1024;
+            }
+            // check whether the total size exceeds 20MB
+            if (totalSizeMb > 20) {
+                toast_push('⚠️文件夹大于 20MB 🚀上传文件中', 3000)
+                // return; // if the limit is exceeded, the upload could be aborted here
+            }
+            // listen for the change event (native Gradio supports this)
+            // uploadInputElement.addEventListener('change', function(){replace_input_string()});
+            let event = new Event("change");
+            Object.defineProperty(event, "target", { value: uploadInputElement, enumerable: true });
+            Object.defineProperty(event, "currentTarget", { value: uploadInputElement, enumerable: true });
+            Object.defineProperty(uploadInputElement, "files", { value: files, enumerable: true });
+            uploadInputElement.dispatchEvent(event);
+        } else {
+            toast_push(exist_file_msg, 3000)
+        }
+    }
+}
+
+function begin_loading_status() {
+    // Create the loader div and add styling
+    var loader = document.createElement('div');
+    loader.id = 'Js_File_Loading';
+    loader.style.position = "absolute";
+    loader.style.top = "50%";
+    loader.style.left = "50%";
+    loader.style.width = "60px";
+    loader.style.height = "60px";
+    loader.style.border = "16px solid #f3f3f3";
+    loader.style.borderTop = "16px solid #3498db";
+    loader.style.borderRadius = "50%";
+    loader.style.animation = "spin 2s linear infinite";
+    loader.style.transform = "translate(-50%, -50%)";
+    document.body.appendChild(loader); // Add the loader to the body
+    // Set the CSS animation keyframes
+    var styleSheet = document.createElement('style');
+    // styleSheet.type = 'text/css';
+    styleSheet.id = 'Js_File_Loading_Style'
+    styleSheet.innerText = `
+    @keyframes spin {
+        0% { transform: rotate(0deg); }
+        100% { transform: rotate(360deg); }
+    }`;
+    document.head.appendChild(styleSheet);
+}
+
+function cancel_loading_status() {
+    var loadingElement = document.getElementById('Js_File_Loading');
+    if (loadingElement) {
+        document.body.removeChild(loadingElement); // remove the loader from the body
+    }
+    var loadingStyle = document.getElementById('Js_File_Loading_Style');
+    if (loadingStyle) {
+        document.head.removeChild(loadingStyle);
+    }
+    let clearButton = document.querySelectorAll('div[id*="elem_upload"] button[aria-label="Clear"]');
+    for (let button of clearButton) {
+        button.addEventListener('click', function () {
+            setTimeout(function () {
+                register_upload_event();
+            }, 50);
+        });
+    }
+}
+
+function register_upload_event() {
+    elem_upload_float = document.getElementById('elem_upload_float')
+    const upload_component = elem_upload_float.querySelector("input[type=file]");
+    if (upload_component) {
+        upload_component.addEventListener('change', function (event) {
+            toast_push('正在上传中,请稍等。', 2000);
+            begin_loading_status();
+        });
+    }
+}
+
+function monitoring_input_box() {
+    register_upload_event();
+
+    elem_upload = document.getElementById('elem_upload')
+    elem_upload_float = document.getElementById('elem_upload_float')
+    elem_input_main = document.getElementById('user_input_main')
+    elem_input_float = document.getElementById('user_input_float')
+    elem_chatbot = document.getElementById('gpt-chatbot')
+
+    if (elem_input_main) {
+        if (elem_input_main.querySelector("textarea")) {
+            add_func_paste(elem_input_main.querySelector("textarea"))
+        }
+    }
+    if (elem_input_float) {
+        if (elem_input_float.querySelector("textarea")) {
+            add_func_paste(elem_input_float.querySelector("textarea"))
+        }
+    }
+    if (elem_chatbot) {
+        add_func_drag(elem_chatbot)
+    }
+}
+
+
+// watch for page changes
+window.addEventListener("DOMContentLoaded", function () {
+    // const ga = document.getElementsByTagName("gradio-app");
+    gradioApp().addEventListener("render", monitoring_input_box);
+});
+
+
+// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
+// Part 5: audio button styling
+// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
+
+function audio_fn_init() {
+    let audio_component = document.getElementById('elem_audio');
+    if (audio_component) {
+        let buttonElement = audio_component.querySelector('button');
+        let specificElement = audio_component.querySelector('.hide.sr-only');
+        specificElement.remove();
+
+        buttonElement.childNodes[1].nodeValue = '启动麦克风';
+        buttonElement.addEventListener('click', function (event) {
+            event.stopPropagation();
+            toast_push('您启动了麦克风!下一步请点击“实时语音对话”启动语音对话。');
+        });
+
+        // find the voice plugin button
+        let buttons = document.querySelectorAll('button');
+        let audio_button = null;
+        for (let button of buttons) {
+            if (button.textContent.includes('语音')) {
+                audio_button = button;
+                break;
+            }
+        }
+        if (audio_button) {
+            audio_button.addEventListener('click', function () {
+                toast_push('您点击了“实时语音对话”启动语音对话。');
+            });
+            let parent_element = audio_component.parentElement; // move buttonElement inside audio_button
+            audio_button.appendChild(audio_component);
+            buttonElement.style.cssText = 'border-color: #00ffe0;border-width: 2px; height: 25px;'
+            parent_element.remove();
+            audio_component.style.cssText = 'width: 250px;right: 0px;display: inline-flex;flex-flow: row-reverse wrap;place-content: stretch space-between;align-items: center;background-color: #ffffff00;';
+        }
+
+    }
+}
+
+
+// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
+// Part 6: JS initialization
+// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
+
+function GptAcademicJavaScriptInit(LAYOUT = "LEFT-RIGHT") {
+    audio_fn_init();
+    chatbotIndicator = gradioApp().querySelector('#gpt-chatbot > div.wrap');
+    var chatbotObserver = new MutationObserver(() => {
+        chatbotContentChanged(1);
+    });
+    chatbotObserver.observe(chatbotIndicator, { attributes: true, childList: true, subtree: true });
+    if (LAYOUT === "LEFT-RIGHT") { chatbotAutoHeight(); }
 }
themes/green.css CHANGED
@@ -256,13 +256,13 @@ textarea.svelte-1pie7s6 {
     max-height: 95% !important;
     overflow-y: auto !important;
 }*/
-.app.svelte-1mya07g.svelte-1mya07g {
+/* .app.svelte-1mya07g.svelte-1mya07g {
     max-width: 100%;
     position: relative;
     padding: var(--size-4);
     width: 100%;
     height: 100%;
-}
+} */

 .gradio-container-3-32-2 h1 {
     font-weight: 700 !important;
themes/theme.py CHANGED
@@ -1,6 +1,14 @@
-import gradio as gr
+import pickle
+import base64
+import uuid
 from toolbox import get_conf
-THEME = get_conf('THEME')
+
+"""
+-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
+Part 1
+Utility functions for loading themes
+-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
+"""

 def load_dynamic_theme(THEME):
     adjust_dynamic_theme = None
@@ -20,4 +28,91 @@ def load_dynamic_theme(THEME):
     theme_declaration = ""
     return adjust_theme, advanced_css, theme_declaration, adjust_dynamic_theme

-adjust_theme, advanced_css, theme_declaration, _ = load_dynamic_theme(THEME)
+adjust_theme, advanced_css, theme_declaration, _ = load_dynamic_theme(get_conf('THEME'))
+
+
+"""
+-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
+Part 2
+Cookie-related utility functions
+-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
+"""
+
+def init_cookie(cookies, chatbot):
+    # assign a unique uuid to every visiting user
+    cookies.update({'uuid': uuid.uuid4()})
+    return cookies
+
+def to_cookie_str(d):
+    # Pickle the dictionary and encode it as a string
+    pickled_dict = pickle.dumps(d)
+    cookie_value = base64.b64encode(pickled_dict).decode('utf-8')
+    return cookie_value
+
+def from_cookie_str(c):
+    # Decode the base64-encoded string and unpickle it into a dictionary
+    pickled_dict = base64.b64decode(c.encode('utf-8'))
+    return pickle.loads(pickled_dict)
+
+
+"""
+-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
+Part 3
+Embedded javascript code
+-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
+"""
+
+js_code_for_css_changing = """(css) => {
+    var existingStyles = document.querySelectorAll("body > gradio-app > div > style")
+    for (var i = 0; i < existingStyles.length; i++) {
+        var style = existingStyles[i];
+        style.parentNode.removeChild(style);
+    }
+    var existingStyles = document.querySelectorAll("style[data-loaded-css]");
+    for (var i = 0; i < existingStyles.length; i++) {
+        var style = existingStyles[i];
+        style.parentNode.removeChild(style);
+    }
+    var styleElement = document.createElement('style');
+    styleElement.setAttribute('data-loaded-css', 'placeholder');
+    styleElement.innerHTML = css;
+    document.body.appendChild(styleElement);
+}
+"""
+
+js_code_for_darkmode_init = """(dark) => {
+    dark = dark == "True";
+    if (document.querySelectorAll('.dark').length) {
+        if (!dark){
+            document.querySelectorAll('.dark').forEach(el => el.classList.remove('dark'));
+        }
+    } else {
+        if (dark){
+            document.querySelector('body').classList.add('dark');
+        }
+    }
+}
+"""
+
+js_code_for_toggle_darkmode = """() => {
+    if (document.querySelectorAll('.dark').length) {
+        document.querySelectorAll('.dark').forEach(el => el.classList.remove('dark'));
+    } else {
+        document.querySelector('body').classList.add('dark');
+    }
+}"""
+
+
+js_code_for_persistent_cookie_init = """(persistent_cookie) => {
+    return getCookie("persistent_cookie");
+}
+"""
+
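A quick round-trip check for the cookie helpers above (assuming the repository root is importable):

from themes.theme import to_cookie_str, from_cookie_str

d = {'uuid': 'demo', 'llm_model': 'gpt-3.5-turbo'}
assert from_cookie_str(to_cookie_str(d)) == d   # the base64-encoded pickle survives the trip

One note on the design choice: pickling is convenient for arbitrary Python values, but a cookie that comes back from the browser is client-controlled input, and unpickling untrusted bytes can execute code; validating or signing such cookies would be the cautious approach.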
toolbox.py CHANGED
@@ -4,6 +4,7 @@ import time
4
  import inspect
5
  import re
6
  import os
 
7
  import gradio
8
  import shutil
9
  import glob
@@ -79,6 +80,7 @@ def ArgsGeneralWrapper(f):
79
  'max_length': max_length,
80
  'temperature':temperature,
81
  'client_ip': request.client.host,
 
82
  }
83
  plugin_kwargs = {
84
  "advanced_arg": plugin_advanced_arg,
@@ -178,12 +180,15 @@ def HotReload(f):
     Finally, a yield from statement returns the reloaded function and executes it in place of the decorated one.
     The decorator thus returns an inner function that refreshes the original definition to its latest version before running it.
     """
-    @wraps(f)
-    def decorated(*args, **kwargs):
-        fn_name = f.__name__
-        f_hot_reload = getattr(importlib.reload(inspect.getmodule(f)), fn_name)
-        yield from f_hot_reload(*args, **kwargs)
-    return decorated
+    if get_conf('PLUGIN_HOT_RELOAD'):
+        @wraps(f)
+        def decorated(*args, **kwargs):
+            fn_name = f.__name__
+            f_hot_reload = getattr(importlib.reload(inspect.getmodule(f)), fn_name)
+            yield from f_hot_reload(*args, **kwargs)
+        return decorated
+    else:
+        return f


 """
@@ -561,7 +566,8 @@ def promote_file_to_downloadzone(file, rename_file=None, chatbot=None):
         user_name = get_user(chatbot)
     else:
         user_name = default_user_name
-
+    if not os.path.exists(file):
+        raise FileNotFoundError(f'File {file} does not exist')
     user_path = get_log_folder(user_name, plugin_name=None)
     if file_already_in_downloadzone(file, user_path):
         new_path = file
@@ -577,7 +583,8 @@ def promote_file_to_downloadzone(file, rename_file=None, chatbot=None):
     if chatbot is not None:
         if 'files_to_promote' in chatbot._cookies: current = chatbot._cookies['files_to_promote']
         else: current = []
-        chatbot._cookies.update({'files_to_promote': [new_path] + current})
+        if new_path not in current:  # avoid adding the same file more than once
+            chatbot._cookies.update({'files_to_promote': [new_path] + current})
     return new_path
@@ -602,6 +609,64 @@ def del_outdated_uploads(outdate_time_seconds, target_path_base=None):
         except: pass
     return

+
+def html_local_file(file):
+    base_path = os.path.dirname(__file__)  # project directory
+    if os.path.exists(str(file)):
+        file = f'file={file.replace(base_path, ".")}'
+    return file
+
+
+def html_local_img(__file, layout='left', max_width=None, max_height=None, md=True):
+    style = ''
+    if max_width is not None:
+        style += f"max-width: {max_width};"
+    if max_height is not None:
+        style += f"max-height: {max_height};"
+    __file = html_local_file(__file)
+    a = f'<div align="{layout}"><img src="{__file}" style="{style}"></div>'
+    if md:
+        a = f'![{__file}]({__file})'
+    return a
+
+def file_manifest_filter_type(file_list, filter_: list = None):
+    new_list = []
+    if not filter_: filter_ = ['png', 'jpg', 'jpeg']
+    for file in file_list:
+        if str(os.path.basename(file)).split('.')[-1] in filter_:
+            new_list.append(html_local_img(file, md=False))
+        else:
+            new_list.append(file)
+    return new_list
+
+def to_markdown_tabs(head: list, tabs: list, alignment=':---:', column=False):
+    """
+    Args:
+        head: table header, a list like []
+        tabs: table values, e.g. [[col1], [col2], [col3], [col4]]
+        alignment: :--- left-aligned, :---: centered, ---: right-aligned
+        column: True to keep data in columns, False to keep data in rows (default).
+    Returns:
+        A string representation of the markdown table.
+    """
+    if column:
+        transposed_tabs = list(map(list, zip(*tabs)))
+    else:
+        transposed_tabs = tabs
+    # Find the maximum length among the columns
+    max_len = max(len(column) for column in transposed_tabs)
+
+    tab_format = "| %s "
+    tabs_list = "".join([tab_format % i for i in head]) + '|\n'
+    tabs_list += "".join([tab_format % alignment for i in head]) + '|\n'
+
+    for i in range(max_len):
+        row_data = [tab[i] if i < len(tab) else '' for tab in transposed_tabs]
+        row_data = file_manifest_filter_type(row_data, filter_=None)
+        tabs_list += "".join([tab_format % i for i in row_data]) + '|\n'
+
+    return tabs_list
+
 def on_file_uploaded(request: gradio.Request, files, chatbot, txt, txt2, checkboxes, cookies):
     """
     Callback function invoked when files are uploaded
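A quick usage sketch of `to_markdown_tabs` with made-up values (with the default `column=False`, each inner list of `tabs` is one column):

```python
head = ['File', 'Size']
tabs = [['a.txt', 'b.md'], ['1 KB', '2 KB']]  # two columns of equal length
print(to_markdown_tabs(head=head, tabs=tabs))
# | File | Size |
# | :---: | :---: |
# | a.txt | 1 KB |
# | b.md | 2 KB |
```

This is exactly how `on_file_uploaded` below renders the received files as a one-column table.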
@@ -626,16 +691,15 @@ def on_file_uploaded(request: gradio.Request, files, chatbot, txt, txt2, checkboxes, cookies):
         this_file_path = pj(target_path_base, file_origin_name)
         shutil.move(file.name, this_file_path)
         upload_msg += extract_archive(file_path=this_file_path, dest_dir=this_file_path+'.extract')
-
-    # collect the set of files
-    moved_files = [fp for fp in glob.glob(f'{target_path_base}/**/*', recursive=True)]
+
     if "浮动输入区" in checkboxes:
         txt, txt2 = "", target_path_base
     else:
         txt, txt2 = target_path_base, ""

-    # compose the output message
-    moved_files_str = '\t\n\n'.join(moved_files)
+    # collect the file set and compose the output message
+    moved_files = [fp for fp in glob.glob(f'{target_path_base}/**/*', recursive=True)]
+    moved_files_str = to_markdown_tabs(head=['文件'], tabs=[moved_files])
     chatbot.append(['我上传了文件,请查收',
                     f'[Local Message] 收到以下文件: \n\n{moved_files_str}' +
                     f'\n\n调用路径参数已自动修正到: \n\n{txt}' +
@@ -856,7 +920,14 @@ def read_single_conf_with_lru_cache(arg):

 @lru_cache(maxsize=128)
 def get_conf(*args):
-    # You are advised to copy your secrets, such as API keys and proxy URLs, into a config_private.py, so they are not accidentally pushed to GitHub
+    """
+    All of this project's configuration is centralized in config.py. There are three ways to change a setting; pick whichever one you prefer:
+    - edit config.py directly
+    - create and edit config_private.py
+    - set environment variables (editing docker-compose.yml is equivalent to changing the environment variables inside the container)
+
+    Note: if you deploy with docker-compose, edit docker-compose.yml (equivalent to changing the container's environment variables)
+    """
     res = []
     for arg in args:
         r = read_single_conf_with_lru_cache(arg)
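For example, a minimal `config_private.py` override might look like the sketch below (the values are placeholders; the key names must match those defined in config.py):

```python
# config_private.py -- read in preference to config.py; keep it out of version control
API_KEY = "sk-..."   # placeholder, not a real key
USE_PROXY = True
proxies = {
    "http":  "socks5h://localhost:11284",  # example proxy address
    "https": "socks5h://localhost:11284",
}
```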
@@ -937,14 +1008,19 @@ def clip_history(inputs, history, tokenizer, max_token_limit):
     def get_token_num(txt):
         return len(tokenizer.encode(txt, disallowed_special=()))
     input_token_num = get_token_num(inputs)
+
+    if max_token_limit < 5000: output_token_expect = 256    # 4k & 2k models
+    elif max_token_limit < 9000: output_token_expect = 512  # 8k models
+    else: output_token_expect = 1024                        # 16k & 32k models
+
     if input_token_num < max_token_limit * 3 / 4:
         # When the input takes up less than 3/4 of the limit, clip as follows:
         # 1. reserve headroom for the input
         max_token_limit = max_token_limit - input_token_num
         # 2. reserve headroom for the output
-        max_token_limit = max_token_limit - 128
+        max_token_limit = max_token_limit - output_token_expect
         # 3. if too little headroom remains, simply clear the history
-        if max_token_limit < 128:
+        if max_token_limit < output_token_expect:
             history = []
             return history
     else:
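Worked numbers for this budget, assuming a hypothetical 4k-context model:

```python
max_token_limit = 4096  # hypothetical 4k model, so output_token_expect = 256
input_token_num = 1000  # 1000 < 4096 * 3/4, so the history is clipped, not cleared
history_budget = max_token_limit - input_token_num - 256
print(history_budget)   # 2840 tokens remain for the clipped history
```

If `history_budget` fell below `output_token_expect`, the history would be dropped entirely instead.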
@@ -1053,7 +1129,7 @@ def get_user(chatbotwithcookies):

 class ProxyNetworkActivate():
     """
-    This code defines an empty context manager named TempProxy, used to put a small block of code behind a proxy
+    This code defines an empty context manager named ProxyNetworkActivate, used to put a small block of code behind a proxy
     """
     def __init__(self, task=None) -> None:
         self.task = task
@@ -1198,6 +1274,35 @@ def get_chat_default_kwargs():

     return default_chat_kwargs

+
+def get_pictures_list(path):
+    file_manifest = [f for f in glob.glob(f'{path}/**/*.jpg', recursive=True)]
+    file_manifest += [f for f in glob.glob(f'{path}/**/*.jpeg', recursive=True)]
+    file_manifest += [f for f in glob.glob(f'{path}/**/*.png', recursive=True)]
+    return file_manifest
+
+
+def have_any_recent_upload_image_files(chatbot):
+    _5min = 5 * 60
+    if chatbot is None: return False, None    # chatbot is None
+    most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
+    if not most_recent_uploaded: return False, None    # most_recent_uploaded is None
+    if time.time() - most_recent_uploaded["time"] < _5min:
+        most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
+        path = most_recent_uploaded['path']
+        file_manifest = get_pictures_list(path)
+        if len(file_manifest) == 0: return False, None
+        return True, file_manifest    # most_recent_uploaded is new
+    else:
+        return False, None    # most_recent_uploaded is too old
+
+
+# Function to encode the image
+def encode_image(image_path):
+    with open(image_path, "rb") as image_file:
+        return base64.b64encode(image_file.read()).decode('utf-8')
+
+
 def get_max_token(llm_kwargs):
     from request_llms.bridge_all import model_info
     return model_info[llm_kwargs['llm_model']]['max_token']
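A sketch of how the `encode_image` output is typically consumed, e.g. framing a freshly uploaded picture as a data URL for a vision-capable model (the flow below assumes a live `chatbot` object, and the message shape follows the OpenAI-style vision API):

```python
# Images uploaded within the last 5 minutes, via the cookie set by on_file_uploaded:
ok, file_manifest = have_any_recent_upload_image_files(chatbot)
if ok:
    b64 = encode_image(file_manifest[0])        # e.g. a .jpg under the upload folder
    data_url = f"data:image/jpeg;base64,{b64}"  # standard data-URL framing
    message_part = {"type": "image_url", "image_url": {"url": data_url}}
```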
 
version CHANGED
@@ -1,5 +1,5 @@
 {
-  "version": 3.61,
+  "version": 3.64,
   "show_feature": true,
-  "new_feature": "修复潜在的多用户冲突问题 <-> 接入Deepseek Coder <-> AutoGen多智能体插件测试版 <-> 修复本地模型在Windows下的加载BUG <-> 支持文心一言v4和星火v3 <-> 支持GLM3和智谱的API <-> 解决本地模型并发BUG <-> 支持动态追加基础功能按钮"
+  "new_feature": "支持直接拖拽文件到上传区 <-> 支持将图片粘贴到输入区 <-> 修复若干隐蔽的内存BUG <-> 修复多用户冲突问题 <-> 接入Deepseek Coder <-> AutoGen多智能体插件测试版"
 }