Merge branch 'v3.3'
Changed files:
- README.md +6 -4
- config.py +8 -2
- crazy_functions/crazy_utils.py +6 -7
- crazy_functions/解析项目源代码.py +6 -3
- main.py +0 -3
- request_llm/bridge_all.py +17 -7
- request_llm/bridge_newbing.py +250 -0
- request_llm/bridge_chatgpt.py +2 -2
- request_llm/edge_gpt.py +409 -0
- request_llm/requirements_newbing.txt +8 -0
- toolbox.py +50 -7
- version +2 -2
README.md
CHANGED
@@ -25,24 +25,26 @@ If you like this project, please give it a Star. If you've come up with more use
 --- | ---
 一键润色 | 支持一键润色、一键查找论文语法错误
 一键中英互译 | 一键中英互译
-一键代码解释 | …
+一键代码解释 | 显示代码、解释代码、生成代码、给代码加注释
 [自定义快捷键](https://www.bilibili.com/video/BV14s4y1E7jN) | 支持自定义快捷键
 [配置代理服务器](https://www.bilibili.com/video/BV1rc411W7Dr) | 支持代理连接OpenAI/Google等,秒解锁ChatGPT互联网[实时信息聚合](https://www.bilibili.com/video/BV1om4y127ck/)能力
 模块化设计 | 支持自定义强大的[函数插件](https://github.com/binary-husky/chatgpt_academic/tree/master/crazy_functions),插件支持[热更新](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
 [自我程序剖析](https://www.bilibili.com/video/BV1cj411A7VW) | [函数插件] [一键读懂](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A)本项目的源代码
 [程序剖析](https://www.bilibili.com/video/BV1cj411A7VW) | [函数插件] 一键可以剖析其他Python/C/C++/Java/Lua/...项目树
-…
+读论文、翻译论文 | [函数插件] 一键解读latex/pdf论文全文并生成摘要
 Latex全文[翻译](https://www.bilibili.com/video/BV1nk4y1Y7Js/)、[润色](https://www.bilibili.com/video/BV1FT411H7c5/) | [函数插件] 一键翻译或润色latex论文
 批量注释生成 | [函数插件] 一键批量生成函数注释
-chat分析报告生成 | [函数插件] 运行后自动生成总结汇报
 Markdown[中英互译](https://www.bilibili.com/video/BV1yo4y157jV/) | [函数插件] 看到上面5种语言的[README](https://github.com/binary-husky/chatgpt_academic/blob/master/docs/README_EN.md)了吗?
-…
+chat分析报告生成 | [函数插件] 运行后自动生成总结汇报
 [PDF论文全文翻译功能](https://www.bilibili.com/video/BV1KT411x7Wn) | [函数插件] PDF论文提取题目&摘要+翻译全文(多线程)
+[Arxiv小助手](https://www.bilibili.com/video/BV1LM4y1279X) | [函数插件] 输入arxiv文章url即可一键翻译摘要+下载PDF
 [谷歌学术统合小助手](https://www.bilibili.com/video/BV19L411U7ia) | [函数插件] 给定任意谷歌学术搜索页面URL,让gpt帮你[写relatedworks](https://www.bilibili.com/video/BV1GP411U7Az/)
+互联网信息聚合+GPT | [函数插件] 一键让ChatGPT先Google搜索,再回答问题,信息流永不过时
 公式/图片/表格显示 | 可以同时显示公式的[tex形式和渲染形式](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png),支持公式、代码高亮
 多线程函数插件支持 | 支持多线调用chatgpt,一键处理[海量文本](https://www.bilibili.com/video/BV1FT411H7c5/)或程序
 启动暗色gradio[主题](https://github.com/binary-husky/chatgpt_academic/issues/173) | 在浏览器url后面添加```/?__dark-theme=true```可以切换dark主题
 [多LLM模型](https://www.bilibili.com/video/BV1wT411p7yf)支持,[API2D](https://api2d.com/)接口支持 | 同时被GPT3.5、GPT4和[清华ChatGLM](https://github.com/THUDM/ChatGLM-6B)伺候的感觉一定会很不错吧?
+更多LLM模型接入 | 新加入Newbing测试接口(新必应AI)
 huggingface免科学上网[在线体验](https://huggingface.co/spaces/qingxu98/gpt-academic) | 登陆huggingface后复制[此空间](https://huggingface.co/spaces/qingxu98/gpt-academic)
 …… | ……
 
config.py
CHANGED
@@ -45,7 +45,7 @@ MAX_RETRY = 2
 
 # OpenAI模型选择是(gpt4现在只对申请成功的人开放,体验gpt-4可以试试api2d)
 LLM_MODEL = "gpt-3.5-turbo" # 可选 ↓↓↓
-AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm"]
+AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing"]
 
 # 本地LLM模型如ChatGLM的执行方式 CPU/GPU
 LOCAL_MODEL_DEVICE = "cpu" # 可选 "cuda"
@@ -58,8 +58,14 @@ CONCURRENT_COUNT = 100
 AUTHENTICATION = []
 
 # 重新URL重新定向,实现更换API_URL的作用(常规情况下,不要修改!!)
-# 格式 {"https://api.openai.com/v1/chat/completions": "…
+# 格式 {"https://api.openai.com/v1/chat/completions": "在这里填写重定向的api.openai.com的URL"}
 API_URL_REDIRECT = {}
 
 # 如果需要在二级路径下运行(常规情况下,不要修改!!)(需要配合修改main.py才能生效!)
 CUSTOM_PATH = "/"
+
+# 如果需要使用newbing,把newbing的长长的cookie放到这里
+NEWBING_STYLE = "creative"  # ["creative", "balanced", "precise"]
+NEWBING_COOKIES = """
+your bing cookies here
+"""
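The `NEWBING_COOKIES` string is later read back with `get_conf` and parsed by `json.loads` in `request_llm/bridge_newbing.py`, so it must hold a JSON array of `{"name": ..., "value": ...}` cookie objects rather than a raw cookie header. A minimal sketch of that round trip; the `_U` cookie shown is a made-up placeholder, not a real Bing cookie:

```python
import json

# Placeholder value; a real export is the JSON cookie list copied from a
# browser cookie extension while logged in to bing.com.
NEWBING_COOKIES = """
[{"name": "_U", "value": "xxxxxxxxxxxxxxxxxxxxxxxx"}]
"""

cookies = json.loads(NEWBING_COOKIES)  # a json.JSONDecodeError here is what triggers
                                       # the "NEWBING_COOKIES未填写或有格式错误" message
for cookie in cookies:
    print(cookie["name"], "->", cookie["value"][:6] + "...")
```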
crazy_functions/crazy_utils.py
CHANGED
@@ -1,5 +1,4 @@
-import traceback
-from toolbox import update_ui, get_conf
+from toolbox import update_ui, get_conf, trimmed_format_exc
 
 def input_clipping(inputs, history, max_token_limit):
     import numpy as np
@@ -94,12 +93,12 @@ def request_gpt_model_in_new_thread_with_ui_alive(
                 continue # 返回重试
             else:
                 # 【选择放弃】
-                tb_str = '```\n' + traceback.format_exc() + '```'
+                tb_str = '```\n' + trimmed_format_exc() + '```'
                 mutable[0] += f"[Local Message] 警告,在执行过程中遭遇问题, Traceback:\n\n{tb_str}\n\n"
                 return mutable[0] # 放弃
         except:
             # 【第三种情况】:其他错误:重试几次
-            tb_str = '```\n' + traceback.format_exc() + '```'
+            tb_str = '```\n' + trimmed_format_exc() + '```'
             print(tb_str)
             mutable[0] += f"[Local Message] 警告,在执行过程中遭遇问题, Traceback:\n\n{tb_str}\n\n"
             if retry_op > 0:
@@ -173,7 +172,7 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
     if max_workers == -1: # 读取配置文件
         try: max_workers, = get_conf('DEFAULT_WORKER_NUM')
         except: max_workers = 8
-    if max_workers <= 0…
+    if max_workers <= 0: max_workers = 3
     # 屏蔽掉 chatglm的多线程,可能会导致严重卡顿
     if not (llm_kwargs['llm_model'].startswith('gpt-') or llm_kwargs['llm_model'].startswith('api2d-')):
         max_workers = 1
@@ -220,14 +219,14 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
                 continue # 返回重试
             else:
                 # 【选择放弃】
-                tb_str = '```\n' + traceback.format_exc() + '```'
+                tb_str = '```\n' + trimmed_format_exc() + '```'
                 gpt_say += f"[Local Message] 警告,线程{index}在执行过程中遭遇问题, Traceback:\n\n{tb_str}\n\n"
                 if len(mutable[index][0]) > 0: gpt_say += "此线程失败前收到的回答:\n\n" + mutable[index][0]
                 mutable[index][2] = "输入过长已放弃"
                 return gpt_say # 放弃
         except:
             # 【第三种情况】:其他错误
-            tb_str = '```\n' + traceback.format_exc() + '```'
+            tb_str = '```\n' + trimmed_format_exc() + '```'
             print(tb_str)
             gpt_say += f"[Local Message] 警告,线程{index}在执行过程中遭遇问题, Traceback:\n\n{tb_str}\n\n"
             if len(mutable[index][0]) > 0: gpt_say += "此线程失败前收到的回答:\n\n" + mutable[index][0]
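Both request helpers in this file run the actual LLM call in a worker thread and pass partial results back through a shared `mutable` list, which keeps the Gradio generator free to refresh the UI while the request streams in. A condensed sketch of that idiom, with a dummy loop standing in for the streaming GPT call:

```python
import threading, time

mutable = ["", time.time()]              # [partial reply, watchdog timestamp]

def worker():
    for word in ["hello", "world"]:      # stand-in for the streaming LLM reply
        time.sleep(0.1)
        mutable[0] += word + " "         # list contents are shared across threads

t = threading.Thread(target=worker, daemon=True)
t.start()
while t.is_alive():                      # the main thread stays free here to
    time.sleep(0.05)                     # yield UI refreshes and check the watchdog
print(mutable[0])                        # -> "hello world "
```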
crazy_functions/解析项目源代码.py
CHANGED
@@ -1,5 +1,6 @@
 from toolbox import update_ui
 from toolbox import CatchException, report_execption, write_results_to_file
+from .crazy_utils import input_clipping
 
 def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
     import os, copy
@@ -61,13 +62,15 @@ def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
         previous_iteration_files.extend([os.path.relpath(fp, project_folder) for index, fp in enumerate(this_iteration_file_manifest)])
         previous_iteration_files_string = ', '.join(previous_iteration_files)
         current_iteration_focus = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(this_iteration_file_manifest)])
-        i_say = f'…
+        i_say = f'用一张Markdown表格简要描述以下文件的功能:{previous_iteration_files_string}。根据以上分析,用一句话概括程序的整体功能。'
         inputs_show_user = f'根据以上分析,对程序的整体功能和构架重新做出概括,由于输入长度限制,可能需要分组处理,本组文件为 {current_iteration_focus} + 已经汇总的文件组。'
         this_iteration_history = copy.deepcopy(this_iteration_gpt_response_collection)
         this_iteration_history.append(last_iteration_result)
+        # 裁剪input
+        inputs, this_iteration_history_feed = input_clipping(inputs=i_say, history=this_iteration_history, max_token_limit=2560)
         result = yield from request_gpt_model_in_new_thread_with_ui_alive(
-            inputs=…
-            history=…
+            inputs=inputs, inputs_show_user=inputs_show_user, llm_kwargs=llm_kwargs, chatbot=chatbot,
+            history=this_iteration_history_feed,   # 迭代之前的分析
             sys_prompt="你是一个程序架构分析师,正在分析一个项目的源代码。")
         report_part_2.extend([i_say, result])
         last_iteration_result = result
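The newly wired-in `input_clipping` keeps each iteration's summary prompt inside the model's context window by trimming accumulated history before the request goes out. Roughly, the idea is the following; this is a sketch only, and the real implementation in crazy_utils.py counts tokens with the model's tokenizer rather than the crude chars/4 estimate used here:

```python
def input_clipping_sketch(inputs, history, max_token_limit, count=lambda s: len(s) // 4):
    """Keep `inputs` intact and drop the oldest history entries until the
    combined size fits `max_token_limit` (crude chars/4 token estimate)."""
    budget = max_token_limit - count(inputs)
    kept = []
    for entry in reversed(history):      # newest context is the most relevant
        cost = count(entry)
        if budget < cost:
            break
        budget -= cost
        kept.append(entry)
    return inputs, list(reversed(kept))
```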
main.py
CHANGED
@@ -173,9 +173,6 @@ def main():
             yield from ArgsGeneralWrapper(crazy_fns[k]["Function"])(*args, **kwargs)
         click_handle = switchy_bt.click(route,[switchy_bt, *input_combo, gr.State(PORT)], output_combo)
         click_handle.then(on_report_generated, [file_upload, chatbot], [file_upload, chatbot])
-        # def expand_file_area(file_upload, area_file_up):
-        #     if len(file_upload)>0: return {area_file_up: gr.update(open=True)}
-        # click_handle.then(expand_file_area, [file_upload, area_file_up], [area_file_up])
         cancel_handles.append(click_handle)
     # 终止按钮的回调函数注册
     stopBtn.click(fn=None, inputs=None, outputs=None, cancels=cancel_handles)
request_llm/bridge_all.py
CHANGED
@@ -11,7 +11,7 @@
 import tiktoken
 from functools import lru_cache
 from concurrent.futures import ThreadPoolExecutor
-from toolbox import get_conf
+from toolbox import get_conf, trimmed_format_exc
 
 from .bridge_chatgpt import predict_no_ui_long_connection as chatgpt_noui
 from .bridge_chatgpt import predict as chatgpt_ui
@@ -19,6 +19,9 @@ from .bridge_chatgpt import predict as chatgpt_ui
 from .bridge_chatglm import predict_no_ui_long_connection as chatglm_noui
 from .bridge_chatglm import predict as chatglm_ui
 
+from .bridge_newbing import predict_no_ui_long_connection as newbing_noui
+from .bridge_newbing import predict as newbing_ui
+
 # from .bridge_tgui import predict_no_ui_long_connection as tgui_noui
 # from .bridge_tgui import predict as tgui_ui
 
@@ -48,6 +51,7 @@ class LazyloadTiktoken(object):
 API_URL_REDIRECT, = get_conf("API_URL_REDIRECT")
 openai_endpoint = "https://api.openai.com/v1/chat/completions"
 api2d_endpoint = "https://openai.api2d.net/v1/chat/completions"
+newbing_endpoint = "wss://sydney.bing.com/sydney/ChatHub"
 # 兼容旧版的配置
 try:
     API_URL, = get_conf("API_URL")
@@ -59,6 +63,7 @@ except:
 # 新版配置
 if openai_endpoint in API_URL_REDIRECT: openai_endpoint = API_URL_REDIRECT[openai_endpoint]
 if api2d_endpoint in API_URL_REDIRECT: api2d_endpoint = API_URL_REDIRECT[api2d_endpoint]
+if newbing_endpoint in API_URL_REDIRECT: newbing_endpoint = API_URL_REDIRECT[newbing_endpoint]
 
 
 # 获取tokenizer
@@ -116,7 +121,15 @@ model_info = {
         "tokenizer": tokenizer_gpt35,
         "token_cnt": get_token_num_gpt35,
     },
-
+    # newbing
+    "newbing": {
+        "fn_with_ui": newbing_ui,
+        "fn_without_ui": newbing_noui,
+        "endpoint": newbing_endpoint,
+        "max_token": 4096,
+        "tokenizer": tokenizer_gpt35,
+        "token_cnt": get_token_num_gpt35,
+    },
 }
 
 
@@ -128,10 +141,7 @@ def LLM_CATCH_EXCEPTION(f):
         try:
             return f(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience)
         except Exception as e:
-            …
-            import traceback
-            proxies, = get_conf('proxies')
-            tb_str = '\n```\n' + traceback.format_exc() + '\n```\n'
+            tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
            observe_window[0] = tb_str
             return tb_str
     return decorated
@@ -182,7 +192,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, obser
 
     def mutex_manager(window_mutex, observe_window):
         while True:
-            time.sleep(0.…
+            time.sleep(0.25)
             if not window_mutex[-1]: break
             # 看门狗(watchdog)
             for i in range(n_model):
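`model_info` turns model selection into a table lookup: every name in `AVAIL_LLM_MODELS` maps to a UI callable, a non-UI callable, an endpoint, and a tokenizer, so adding NewBing amounts to one more dictionary entry. A toy sketch of that registry-dispatch pattern, with hypothetical lambdas standing in for the real `predict` functions:

```python
# Registry-dispatch sketch; the real entries also carry "endpoint",
# "max_token", "tokenizer" and "token_cnt".
model_info = {
    "newbing": {
        "fn_with_ui":    lambda q: f"[newbing-ui] {q}",
        "fn_without_ui": lambda q: f"[newbing] {q}",
    },
}

def dispatch(llm_model, query, with_ui=True):
    entry = model_info[llm_model]        # KeyError means an unregistered model
    fn = entry["fn_with_ui"] if with_ui else entry["fn_without_ui"]
    return fn(query)

print(dispatch("newbing", "hello", with_ui=False))   # -> [newbing] hello
```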
request_llm/bridge_chatgpt.py
CHANGED
@@ -21,7 +21,7 @@ import importlib
 
 # config_private.py放自己的秘密如API和代理网址
 # 读取时首先看是否存在私密的config_private配置文件(不受git管控),如果有,则覆盖原config文件
-from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys, clip_history
+from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys, clip_history, trimmed_format_exc
 proxies, API_KEY, TIMEOUT_SECONDS, MAX_RETRY = \
     get_conf('proxies', 'API_KEY', 'TIMEOUT_SECONDS', 'MAX_RETRY')
 
@@ -215,7 +215,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
             chatbot[-1] = (chatbot[-1][0], "[Local Message] Not enough point. API2D账户点数不足.")
         else:
             from toolbox import regular_txt_to_markdown
-            tb_str = '```\n' + traceback.format_exc() + '```'
+            tb_str = '```\n' + trimmed_format_exc() + '```'
             chatbot[-1] = (chatbot[-1][0], f"[Local Message] 异常 \n\n{tb_str} \n\n{regular_txt_to_markdown(chunk_decoded[4:])}")
             yield from update_ui(chatbot=chatbot, history=history, msg="Json异常" + error_msg) # 刷新界面
             return
request_llm/bridge_newbing.py
ADDED
@@ -0,0 +1,250 @@
"""
========================================================================
第一部分:来自EdgeGPT.py
https://github.com/acheong08/EdgeGPT
========================================================================
"""
from .edge_gpt import NewbingChatbot
load_message = "等待NewBing响应。"

"""
========================================================================
第二部分:子进程Worker(调用主体)
========================================================================
"""
import time
import json
import re
import asyncio
import importlib
import threading
from toolbox import update_ui, get_conf, trimmed_format_exc
from multiprocessing import Process, Pipe

def preprocess_newbing_out(s):
    pattern = r'\^(\d+)\^'                  # 匹配^数字^
    sub = lambda m: '\['+m.group(1)+'\]'    # 将匹配到的数字作为替换值
    result = re.sub(pattern, sub, s)        # 替换操作
    if '[1]' in result:
        result += '\n\n```\n' + "\n".join([r for r in result.split('\n') if r.startswith('[')]) + '\n```\n'
    return result

def preprocess_newbing_out_simple(result):
    if '[1]' in result:
        result += '\n\n```\n' + "\n".join([r for r in result.split('\n') if r.startswith('[')]) + '\n```\n'
    return result

class NewBingHandle(Process):
    def __init__(self):
        super().__init__(daemon=True)
        self.parent, self.child = Pipe()
        self.newbing_model = None
        self.info = ""
        self.success = True
        self.local_history = []
        self.check_dependency()
        self.start()
        self.threadLock = threading.Lock()

    def check_dependency(self):
        try:
            self.success = False
            import certifi, httpx, rich
            self.info = "依赖检测通过,等待NewBing响应。注意目前不能多人同时调用NewBing接口(有线程锁),否则将导致每个人的NewBing问询历史互相渗透。调用NewBing时,会自动使用已配置的代理。"
            self.success = True
        except:
            self.info = "缺少的依赖,如果要使用Newbing,除了基础的pip依赖以外,您还需要运行`pip install -r request_llm/requirements_newbing.txt`安装Newbing的依赖。"
            self.success = False

    def ready(self):
        return self.newbing_model is not None

    async def async_run(self):
        # 读取配置
        NEWBING_STYLE, = get_conf('NEWBING_STYLE')
        from request_llm.bridge_all import model_info
        endpoint = model_info['newbing']['endpoint']
        while True:
            # 等待
            kwargs = self.child.recv()
            question=kwargs['query']
            history=kwargs['history']
            system_prompt=kwargs['system_prompt']

            # 是否重置
            if len(self.local_history) > 0 and len(history)==0:
                await self.newbing_model.reset()
                self.local_history = []

            # 开始问问题
            prompt = ""
            if system_prompt not in self.local_history:
                self.local_history.append(system_prompt)
                prompt += system_prompt + '\n'

            # 追加历史
            for ab in history:
                a, b = ab
                if a not in self.local_history:
                    self.local_history.append(a)
                    prompt += a + '\n'
                if b not in self.local_history:
                    self.local_history.append(b)
                    prompt += b + '\n'

            # 问题
            prompt += question
            self.local_history.append(question)

            # 提交
            async for final, response in self.newbing_model.ask_stream(
                prompt=question,
                conversation_style=NEWBING_STYLE,     # ["creative", "balanced", "precise"]
                wss_link=endpoint,                    # "wss://sydney.bing.com/sydney/ChatHub"
            ):
                if not final:
                    print(response)
                    self.child.send(str(response))
                else:
                    print('-------- receive final ---------')
                    self.child.send('[Finish]')


    def run(self):
        """
        这个函数运行在子进程
        """
        # 第一次运行,加载参数
        self.success = False
        self.local_history = []
        if (self.newbing_model is None) or (not self.success):
            # 代理设置
            proxies, = get_conf('proxies')
            if proxies is None:
                self.proxies_https = None
            else:
                self.proxies_https = proxies['https']
            # cookie
            NEWBING_COOKIES, = get_conf('NEWBING_COOKIES')
            try:
                cookies = json.loads(NEWBING_COOKIES)
            except:
                self.success = False
                tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
                self.child.send(f'[Local Message] 不能加载Newbing组件。NEWBING_COOKIES未填写或有格式错误。')
                self.child.send('[Fail]')
                self.child.send('[Finish]')
                raise RuntimeError(f"不能加载Newbing组件。NEWBING_COOKIES未填写或有格式错误。")

            try:
                self.newbing_model = NewbingChatbot(proxy=self.proxies_https, cookies=cookies)
            except:
                self.success = False
                tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
                self.child.send(f'[Local Message] 不能加载Newbing组件。{tb_str}')
                self.child.send('[Fail]')
                self.child.send('[Finish]')
                raise RuntimeError(f"不能加载Newbing组件。")

        self.success = True
        try:
            # 进入任务等待状态
            asyncio.run(self.async_run())
        except Exception:
            tb_str = '```\n' + trimmed_format_exc() + '```'
            self.child.send(f'[Local Message] Newbing失败 {tb_str}.')
            self.child.send('[Fail]')
            self.child.send('[Finish]')

    def stream_chat(self, **kwargs):
        """
        这个函数运行在主进程
        """
        self.threadLock.acquire()
        self.parent.send(kwargs)        # 发送请求到子进程
        while True:
            res = self.parent.recv()    # 等待newbing回复的片段
            if res == '[Finish]':
                break                   # 结束
            elif res == '[Fail]':
                self.success = False
                break
            else:
                yield res               # newbing回复的片段
        self.threadLock.release()


"""
========================================================================
第三部分:主进程统一调用函数接口
========================================================================
"""
global newbing_handle
newbing_handle = None

def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False):
    """
    多线程方法
    函数的说明请见 request_llm/bridge_all.py
    """
    global newbing_handle
    if (newbing_handle is None) or (not newbing_handle.success):
        newbing_handle = NewBingHandle()
        observe_window[0] = load_message + "\n\n" + newbing_handle.info
        if not newbing_handle.success:
            error = newbing_handle.info
            newbing_handle = None
            raise RuntimeError(error)

    # 没有 sys_prompt 接口,因此把prompt加入 history
    history_feedin = []
    for i in range(len(history)//2):
        history_feedin.append([history[2*i], history[2*i+1]] )

    watch_dog_patience = 5 # 看门狗 (watchdog) 的耐心, 设置5秒即可
    response = ""
    observe_window[0] = "[Local Message]: 等待NewBing响应中 ..."
    for response in newbing_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=sys_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
        observe_window[0] = preprocess_newbing_out_simple(response)
        if len(observe_window) >= 2:
            if (time.time()-observe_window[1]) > watch_dog_patience:
                raise RuntimeError("程序终止。")
    return preprocess_newbing_out_simple(response)

def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
    """
    单线程方法
    函数的说明请见 request_llm/bridge_all.py
    """
    chatbot.append((inputs, "[Local Message]: 等待NewBing响应中 ..."))

    global newbing_handle
    if (newbing_handle is None) or (not newbing_handle.success):
        newbing_handle = NewBingHandle()
        chatbot[-1] = (inputs, load_message + "\n\n" + newbing_handle.info)
        yield from update_ui(chatbot=chatbot, history=[])
        if not newbing_handle.success:
            newbing_handle = None
            return

    if additional_fn is not None:
        import core_functional
        importlib.reload(core_functional)    # 热更新prompt
        core_functional = core_functional.get_core_functions()
        if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs)  # 获取预处理函数(如果有的话)
        inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]

    history_feedin = []
    for i in range(len(history)//2):
        history_feedin.append([history[2*i], history[2*i+1]] )

    chatbot[-1] = (inputs, "[Local Message]: 等待NewBing响应中 ...")
    response = "[Local Message]: 等待NewBing响应中 ..."
    yield from update_ui(chatbot=chatbot, history=history, msg="NewBing响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。")
    for response in newbing_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=system_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
        chatbot[-1] = (inputs, preprocess_newbing_out(response))
        yield from update_ui(chatbot=chatbot, history=history, msg="NewBing响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。")

    history.extend([inputs, preprocess_newbing_out(response)])
    yield from update_ui(chatbot=chatbot, history=history, msg="完成全部响应,请提交新问题。")
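`NewBingHandle` isolates the async EdgeGPT client in a daemon subprocess and streams fragments back over a `multiprocessing.Pipe`, using the string sentinels `'[Fail]'` and `'[Finish]'` to signal state. A stripped-down sketch of that parent/child pattern, with dummy work in place of the real `ask_stream` call:

```python
from multiprocessing import Process, Pipe

class WorkerHandle(Process):
    """Same shape as NewBingHandle: the child loop receives a request dict,
    streams partial results back, then sends a '[Finish]' sentinel."""
    def __init__(self):
        super().__init__(daemon=True)
        self.parent, self.child = Pipe()
        self.start()

    def run(self):                        # runs in the child process
        while True:
            kwargs = self.child.recv()
            for chunk in kwargs["query"].split():   # stand-in for ask_stream
                self.child.send(chunk)
            self.child.send("[Finish]")

    def stream_chat(self, **kwargs):      # runs in the main process
        self.parent.send(kwargs)
        while (res := self.parent.recv()) != "[Finish]":
            yield res

if __name__ == "__main__":
    h = WorkerHandle()
    print(list(h.stream_chat(query="hello new bing")))  # ['hello', 'new', 'bing']
```

The thread lock in the real handle exists because there is only one conversation per subprocess; concurrent callers would otherwise interleave their questions into each other's Bing history.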
request_llm/edge_gpt.py
ADDED
@@ -0,0 +1,409 @@
"""
========================================================================
第一部分:来自EdgeGPT.py
https://github.com/acheong08/EdgeGPT
========================================================================
"""

import argparse
import asyncio
import json
import os
import random
import re
import ssl
import sys
import uuid
from enum import Enum
from typing import Generator
from typing import Literal
from typing import Optional
from typing import Union
import websockets.client as websockets

DELIMITER = "\x1e"


# Generate random IP between range 13.104.0.0/14
FORWARDED_IP = (
    f"13.{random.randint(104, 107)}.{random.randint(0, 255)}.{random.randint(0, 255)}"
)

HEADERS = {
    "accept": "application/json",
    "accept-language": "en-US,en;q=0.9",
    "content-type": "application/json",
    "sec-ch-ua": '"Not_A Brand";v="99", "Microsoft Edge";v="110", "Chromium";v="110"',
    "sec-ch-ua-arch": '"x86"',
    "sec-ch-ua-bitness": '"64"',
    "sec-ch-ua-full-version": '"109.0.1518.78"',
    "sec-ch-ua-full-version-list": '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"',
    "sec-ch-ua-mobile": "?0",
    "sec-ch-ua-model": "",
    "sec-ch-ua-platform": '"Windows"',
    "sec-ch-ua-platform-version": '"15.0.0"',
    "sec-fetch-dest": "empty",
    "sec-fetch-mode": "cors",
    "sec-fetch-site": "same-origin",
    "x-ms-client-request-id": str(uuid.uuid4()),
    "x-ms-useragent": "azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32",
    "Referer": "https://www.bing.com/search?q=Bing+AI&showconv=1&FORM=hpcodx",
    "Referrer-Policy": "origin-when-cross-origin",
    "x-forwarded-for": FORWARDED_IP,
}

HEADERS_INIT_CONVER = {
    "authority": "edgeservices.bing.com",
    "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7",
    "accept-language": "en-US,en;q=0.9",
    "cache-control": "max-age=0",
    "sec-ch-ua": '"Chromium";v="110", "Not A(Brand";v="24", "Microsoft Edge";v="110"',
    "sec-ch-ua-arch": '"x86"',
    "sec-ch-ua-bitness": '"64"',
    "sec-ch-ua-full-version": '"110.0.1587.69"',
    "sec-ch-ua-full-version-list": '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"',
    "sec-ch-ua-mobile": "?0",
    "sec-ch-ua-model": '""',
    "sec-ch-ua-platform": '"Windows"',
    "sec-ch-ua-platform-version": '"15.0.0"',
    "sec-fetch-dest": "document",
    "sec-fetch-mode": "navigate",
    "sec-fetch-site": "none",
    "sec-fetch-user": "?1",
    "upgrade-insecure-requests": "1",
    "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36 Edg/110.0.1587.69",
    "x-edge-shopping-flag": "1",
    "x-forwarded-for": FORWARDED_IP,
}

def get_ssl_context():
    import certifi
    ssl_context = ssl.create_default_context()
    ssl_context.load_verify_locations(certifi.where())
    return ssl_context



class NotAllowedToAccess(Exception):
    pass


class ConversationStyle(Enum):
    creative = "h3imaginative,clgalileo,gencontentv3"
    balanced = "galileo"
    precise = "h3precise,clgalileo"


CONVERSATION_STYLE_TYPE = Optional[
    Union[ConversationStyle, Literal["creative", "balanced", "precise"]]
]


def _append_identifier(msg: dict) -> str:
    """
    Appends special character to end of message to identify end of message
    """
    # Convert dict to json string
    return json.dumps(msg) + DELIMITER


def _get_ran_hex(length: int = 32) -> str:
    """
    Returns random hex string
    """
    return "".join(random.choice("0123456789abcdef") for _ in range(length))


class _ChatHubRequest:
    """
    Request object for ChatHub
    """

    def __init__(
        self,
        conversation_signature: str,
        client_id: str,
        conversation_id: str,
        invocation_id: int = 0,
    ) -> None:
        self.struct: dict = {}

        self.client_id: str = client_id
        self.conversation_id: str = conversation_id
        self.conversation_signature: str = conversation_signature
        self.invocation_id: int = invocation_id

    def update(
        self,
        prompt,
        conversation_style,
        options,
    ) -> None:
        """
        Updates request object
        """
        if options is None:
            options = [
                "deepleo",
                "enable_debug_commands",
                "disable_emoji_spoken_text",
                "enablemm",
            ]
        if conversation_style:
            if not isinstance(conversation_style, ConversationStyle):
                conversation_style = getattr(ConversationStyle, conversation_style)
            options = [
                "nlu_direct_response_filter",
                "deepleo",
                "disable_emoji_spoken_text",
                "responsible_ai_policy_235",
                "enablemm",
                conversation_style.value,
                "dtappid",
                "cricinfo",
                "cricinfov2",
                "dv3sugg",
            ]
        self.struct = {
            "arguments": [
                {
                    "source": "cib",
                    "optionsSets": options,
                    "sliceIds": [
                        "222dtappid",
                        "225cricinfo",
                        "224locals0",
                    ],
                    "traceId": _get_ran_hex(32),
                    "isStartOfSession": self.invocation_id == 0,
                    "message": {
                        "author": "user",
                        "inputMethod": "Keyboard",
                        "text": prompt,
                        "messageType": "Chat",
                    },
                    "conversationSignature": self.conversation_signature,
                    "participant": {
                        "id": self.client_id,
                    },
                    "conversationId": self.conversation_id,
                },
            ],
            "invocationId": str(self.invocation_id),
            "target": "chat",
            "type": 4,
        }
        self.invocation_id += 1


class _Conversation:
    """
    Conversation API
    """

    def __init__(
        self,
        cookies,
        proxy,
    ) -> None:
        self.struct: dict = {
            "conversationId": None,
            "clientId": None,
            "conversationSignature": None,
            "result": {"value": "Success", "message": None},
        }
        import httpx
        self.proxy = proxy
        proxy = (
            proxy
            or os.environ.get("all_proxy")
            or os.environ.get("ALL_PROXY")
            or os.environ.get("https_proxy")
            or os.environ.get("HTTPS_PROXY")
            or None
        )
        if proxy is not None and proxy.startswith("socks5h://"):
            proxy = "socks5://" + proxy[len("socks5h://") :]
        self.session = httpx.Client(
            proxies=proxy,
            timeout=30,
            headers=HEADERS_INIT_CONVER,
        )
        for cookie in cookies:
            self.session.cookies.set(cookie["name"], cookie["value"])

        # Send GET request
        response = self.session.get(
            url=os.environ.get("BING_PROXY_URL")
            or "https://edgeservices.bing.com/edgesvc/turing/conversation/create",
        )
        if response.status_code != 200:
            response = self.session.get(
                "https://edge.churchless.tech/edgesvc/turing/conversation/create",
            )
        if response.status_code != 200:
            print(f"Status code: {response.status_code}")
            print(response.text)
            print(response.url)
            raise Exception("Authentication failed")
        try:
            self.struct = response.json()
        except (json.decoder.JSONDecodeError, NotAllowedToAccess) as exc:
            raise Exception(
                "Authentication failed. You have not been accepted into the beta.",
            ) from exc
        if self.struct["result"]["value"] == "UnauthorizedRequest":
            raise NotAllowedToAccess(self.struct["result"]["message"])


class _ChatHub:
    """
    Chat API
    """

    def __init__(self, conversation) -> None:
        self.wss = None
        self.request: _ChatHubRequest
        self.loop: bool
        self.task: asyncio.Task
        print(conversation.struct)
        self.request = _ChatHubRequest(
            conversation_signature=conversation.struct["conversationSignature"],
            client_id=conversation.struct["clientId"],
            conversation_id=conversation.struct["conversationId"],
        )

    async def ask_stream(
        self,
        prompt: str,
        wss_link: str,
        conversation_style: CONVERSATION_STYLE_TYPE = None,
        raw: bool = False,
        options: dict = None,
    ) -> Generator[str, None, None]:
        """
        Ask a question to the bot
        """
        if self.wss and not self.wss.closed:
            await self.wss.close()
        # Check if websocket is closed
        self.wss = await websockets.connect(
            wss_link,
            extra_headers=HEADERS,
            max_size=None,
            ssl=get_ssl_context()
        )
        await self._initial_handshake()
        # Construct a ChatHub request
        self.request.update(
            prompt=prompt,
            conversation_style=conversation_style,
            options=options,
        )
        # Send request
        await self.wss.send(_append_identifier(self.request.struct))
        final = False
        while not final:
            objects = str(await self.wss.recv()).split(DELIMITER)
            for obj in objects:
                if obj is None or not obj:
                    continue
                response = json.loads(obj)
                if response.get("type") != 2 and raw:
                    yield False, response
                elif response.get("type") == 1 and response["arguments"][0].get(
                    "messages",
                ):
                    resp_txt = response["arguments"][0]["messages"][0]["adaptiveCards"][
                        0
                    ]["body"][0].get("text")
                    yield False, resp_txt
                elif response.get("type") == 2:
                    final = True
                    yield True, response

    async def _initial_handshake(self) -> None:
        await self.wss.send(_append_identifier({"protocol": "json", "version": 1}))
        await self.wss.recv()

    async def close(self) -> None:
        """
        Close the connection
        """
        if self.wss and not self.wss.closed:
            await self.wss.close()


class NewbingChatbot:
    """
    Combines everything to make it seamless
    """

    def __init__(
        self,
        cookies,
        proxy
    ) -> None:
        if cookies is None:
            cookies = {}
        self.cookies = cookies
        self.proxy = proxy
        self.chat_hub: _ChatHub = _ChatHub(
            _Conversation(self.cookies, self.proxy),
        )

    async def ask(
        self,
        prompt: str,
        wss_link: str,
        conversation_style: CONVERSATION_STYLE_TYPE = None,
        options: dict = None,
    ) -> dict:
        """
        Ask a question to the bot
        """
        async for final, response in self.chat_hub.ask_stream(
            prompt=prompt,
            conversation_style=conversation_style,
            wss_link=wss_link,
            options=options,
        ):
            if final:
                return response
        await self.chat_hub.wss.close()
        return None

    async def ask_stream(
        self,
        prompt: str,
        wss_link: str,
        conversation_style: CONVERSATION_STYLE_TYPE = None,
        raw: bool = False,
        options: dict = None,
    ) -> Generator[str, None, None]:
        """
        Ask a question to the bot
        """
        async for response in self.chat_hub.ask_stream(
            prompt=prompt,
            conversation_style=conversation_style,
            wss_link=wss_link,
            raw=raw,
            options=options,
        ):
            yield response

    async def close(self) -> None:
        """
        Close the connection
        """
        await self.chat_hub.close()

    async def reset(self) -> None:
        """
        Reset the conversation
        """
        await self.close()
        self.chat_hub = _ChatHub(_Conversation(self.cookies, self.proxy))
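Every ChatHub message is a JSON object framed by the ASCII record separator `\x1e` (`DELIMITER`): `_append_identifier` appends it on send, and `ask_stream` splits incoming websocket data on it. A tiny round trip shows why the split produces an empty trailing element that the `if obj is None or not obj` guard must skip:

```python
import json

DELIMITER = "\x1e"  # ASCII record separator used to frame ChatHub messages

def append_identifier(msg: dict) -> str:
    return json.dumps(msg) + DELIMITER

# Two framed messages arriving in a single websocket read:
buffer = append_identifier({"protocol": "json", "version": 1}) + \
         append_identifier({"type": 2, "invocationId": "0"})

for frame in buffer.split(DELIMITER):
    if not frame:          # the trailing delimiter leaves an empty tail
        continue
    print(json.loads(frame))
```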
request_llm/requirements_newbing.txt
ADDED
@@ -0,0 +1,8 @@
BingImageCreator
certifi
httpx
prompt_toolkit
requests
rich
websockets
httpx[socks]
toolbox.py
CHANGED
@@ -5,7 +5,20 @@ import inspect
 import re
 from latex2mathml.converter import convert as tex2mathml
 from functools import wraps, lru_cache
-…
+
+"""
+========================================================================
+第一部分
+函数插件输入输出接驳区
+    - ChatBotWithCookies:   带Cookies的Chatbot类,为实现更多强大的功能做基础
+    - ArgsGeneralWrapper:   装饰器函数,用于重组输入参数,改变输入参数的顺序与结构
+    - update_ui:            刷新界面用 yield from update_ui(chatbot, history)
+    - CatchException:       将插件中出的所有问题显示在界面上
+    - HotReload:            实现插件的热更新
+    - trimmed_format_exc:   打印traceback,为了安全而隐藏绝对地址
+========================================================================
+"""
+
 class ChatBotWithCookies(list):
     def __init__(self, cookie):
         self._cookies = cookie
@@ -20,6 +33,7 @@ class ChatBotWithCookies(list):
     def get_cookies(self):
         return self._cookies
 
+
 def ArgsGeneralWrapper(f):
     """
     装饰器函数,用于重组输入参数,改变输入参数的顺序与结构。
@@ -47,6 +61,7 @@ def ArgsGeneralWrapper(f):
         yield from f(txt_passon, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, system_prompt, *args)
     return decorated
 
+
 def update_ui(chatbot, history, msg='正常', **kwargs): # 刷新界面
     """
     刷新用户界面
@@ -54,10 +69,18 @@ def update_ui(chatbot, history, msg='正常', **kwargs): # 刷新界面
     assert isinstance(chatbot, ChatBotWithCookies), "在传递chatbot的过程中不要将其丢弃。必要时,可用clear将其清空,然后用for+append循环重新赋值。"
     yield chatbot.get_cookies(), chatbot, history, msg
 
+def trimmed_format_exc():
+    import os, traceback
+    str = traceback.format_exc()
+    current_path = os.getcwd()
+    replace_path = "."
+    return str.replace(current_path, replace_path)
+
 def CatchException(f):
     """
     装饰器函数,捕捉函数f中的异常并封装到一个生成器中返回,并显示到聊天当中。
     """
+
     @wraps(f)
     def decorated(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT):
         try:
@@ -66,7 +89,7 @@ def CatchException(f):
             from check_proxy import check_proxy
             from toolbox import get_conf
             proxies, = get_conf('proxies')
-            tb_str = '```\n' + traceback.format_exc() + '```'
+            tb_str = '```\n' + trimmed_format_exc() + '```'
             if chatbot is None or len(chatbot) == 0:
                 chatbot = [["插件调度异常", "异常原因"]]
             chatbot[-1] = (chatbot[-1][0],
@@ -93,7 +116,23 @@ def HotReload(f):
     return decorated
 
 
-…
+"""
+========================================================================
+第二部分
+其他小工具:
+    - write_results_to_file:    将结果写入markdown文件中
+    - regular_txt_to_markdown:  将普通文本转换为Markdown格式的文本。
+    - report_execption:         向chatbot中添加简单的意外错误信息
+    - text_divide_paragraph:    将文本按照段落分隔符分割开,生成带有段落标签的HTML代码。
+    - markdown_convertion:      用多种方式组合,将markdown转化为好看的html
+    - format_io:                接管gradio默认的markdown处理方式
+    - on_file_uploaded:         处理文件的上传(自动解压)
+    - on_report_generated:      将生成的报告自动投射到文件上传区
+    - clip_history:             当历史上下文过长时,自动截断
+    - get_conf:                 获取设置
+    - select_api_key:           根据当前的模型类别,抽取可用的api-key
+========================================================================
+"""
 
 def get_reduce_token_percent(text):
     """
@@ -113,7 +152,6 @@ def get_reduce_token_percent(text):
     return 0.5, '不详'
 
 
-…
 def write_results_to_file(history, file_name=None):
     """
     将对话记录history以Markdown格式写入文件中。如果没有指定文件名,则使用当前时间生成文件名。
@@ -369,6 +407,9 @@ def find_recent_files(directory):
 
 
 def on_file_uploaded(files, chatbot, txt, txt2, checkboxes):
+    """
+    当文件被上传时的回调函数
+    """
     if len(files) == 0:
         return chatbot, txt
     import shutil
@@ -388,8 +429,7 @@ def on_file_uploaded(files, chatbot, txt, txt2, checkboxes):
         shutil.copy(file.name, f'private_upload/{time_tag}/{file_origin_name}')
         err_msg += extract_archive(f'private_upload/{time_tag}/{file_origin_name}',
                                    dest_dir=f'private_upload/{time_tag}/{file_origin_name}.extract')
-    moved_files = [fp for fp in glob.glob(
-        'private_upload/**/*', recursive=True)]
+    moved_files = [fp for fp in glob.glob('private_upload/**/*', recursive=True)]
     if "底部输入区" in checkboxes:
         txt = ""
         txt2 = f'private_upload/{time_tag}'
@@ -508,7 +548,7 @@ def clear_line_break(txt):
 class DummyWith():
     """
     这段代码定义了一个名为DummyWith的空上下文管理器,
-…
+    它的作用是……额……就是不起作用,即在代码结构不变得情况下取代其他的上下文管理器。
     上下文管理器是一种Python对象,用于与with语句一起使用,
     以确保一些资源在代码块执行期间得到正确的初始化和清理。
     上下文管理器必须实现两个方法,分别为 __enter__()和 __exit__()。
@@ -522,6 +562,9 @@ class DummyWith():
         return
 
 def run_gradio_in_subpath(demo, auth, port, custom_path):
+    """
+    把gradio的运行地址更改到指定的二次路径上
+    """
     def is_path_legal(path: str)->bool:
         '''
         check path for sub url
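`trimmed_format_exc`, the helper that most of this commit's call sites switch to, is a small privacy shim over `traceback.format_exc`: it rewrites the current working directory to `.` so tracebacks rendered into the chat window don't leak the host's absolute paths. For example:

```python
import os, traceback

def trimmed_format_exc():
    # Same idea as the toolbox version: hide the absolute working directory.
    return traceback.format_exc().replace(os.getcwd(), ".")

try:
    raise ValueError("demo")
except ValueError:
    print(trimmed_format_exc())
    # a path like /home/user/chatgpt_academic/toolbox.py prints as ./toolbox.py
```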
version
CHANGED
@@ -1,5 +1,5 @@
 {
-    "version": 3.…
+    "version": 3.3,
     "show_feature": true,
-    "new_feature": "保存对话功能 <-> 解读任意语言代码+同时询问任意的LLM组合 <-> 添加联网(Google)回答问题插件 <-> 修复ChatGLM上下文BUG <-> 添加支持清华ChatGLM和GPT-4 <-> 改进架构,支持与多个LLM模型同时对话 <-> 添加支持API2D(国内,可支持gpt4)"
+    "new_feature": "支持NewBing !! <-> 保存对话功能 <-> 解读任意语言代码+同时询问任意的LLM组合 <-> 添加联网(Google)回答问题插件 <-> 修复ChatGLM上下文BUG <-> 添加支持清华ChatGLM和GPT-4 <-> 改进架构,支持与多个LLM模型同时对话 <-> 添加支持API2D(国内,可支持gpt4)"
 }