Invalid Literal error, and other problems with Oobabooga+SillyTavern

#1
by yumeshiro - opened

I'm getting an issue when loading the model, and another when trying to use it.

Here's the error when loading:

```
Traceback (most recent call last):
  File "D:\0\Oobabooga\installer_files\env\Lib\site-packages\gradio\queueing.py", line 407, in call_prediction
    output = await route_utils.call_process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\0\Oobabooga\installer_files\env\Lib\site-packages\gradio\route_utils.py", line 226, in call_process_api
    output = await app.get_blocks().process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\0\Oobabooga\installer_files\env\Lib\site-packages\gradio\blocks.py", line 1550, in process_api
    result = await self.call_function(
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\0\Oobabooga\installer_files\env\Lib\site-packages\gradio\blocks.py", line 1185, in call_function
    prediction = await anyio.to_thread.run_sync(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\0\Oobabooga\installer_files\env\Lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\0\Oobabooga\installer_files\env\Lib\site-packages\anyio\_backends\_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "D:\0\Oobabooga\installer_files\env\Lib\site-packages\anyio\_backends\_asyncio.py", line 851, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\0\Oobabooga\installer_files\env\Lib\site-packages\gradio\utils.py", line 661, in wrapper
    response = f(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^
  File "D:\0\Oobabooga\modules\models_settings.py", line 199, in update_model_parameters
    value = int(value)
            ^^^^^^^^^^
ValueError: invalid literal for int() with base 10: '5.0'
```
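For context, the ValueError itself is just Python's int() refusing a float-formatted string: the UI appears to be handing the loader "5.0" for a setting that update_model_parameters expects to be an integer. A minimal reproduction, along with the usual workaround of parsing through float() first (the to_int helper below is illustrative, not part of the webui code):

```python
# Reproduce the error from update_model_parameters: int() only accepts
# integer literals, so a float-formatted string raises ValueError.
try:
    int("5.0")
except ValueError as exc:
    print(exc)  # invalid literal for int() with base 10: '5.0'

# A common workaround is to convert through float() first, so that
# integer-typed settings also accept values like "5.0":
def to_int(value):
    return int(float(value))

print(to_int("5.0"))  # 5
```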

Here's the error when it tries to generate a response through SillyTavern:

```
Traceback (most recent call last):
  File "D:\0\Oobabooga\modules\callbacks.py", line 61, in gentask
    ret = self.mfunc(callback=_callback, *args, **self.kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\0\Oobabooga\modules\text_generation.py", line 397, in generate_with_callback
    shared.model.generate(**kwargs)
  File "D:\0\Oobabooga\installer_files\env\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "D:\0\Oobabooga\installer_files\env\Lib\site-packages\transformers\generation\utils.py", line 1592, in generate
    return self.sample(
           ^^^^^^^^^^^^
  File "D:\0\Oobabooga\installer_files\env\Lib\site-packages\transformers\generation\utils.py", line 2696, in sample
    outputs = self(
              ^^^^^
  File "D:\0\Oobabooga\modules\exllamav2_hf.py", line 136, in __call__
    self.ex_model.forward(seq_tensor[:-1].view(1, -1), ex_cache, preprocess_only=True, loras=self.loras)
  File "D:\0\Oobabooga\installer_files\env\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "D:\0\Oobabooga\installer_files\env\Lib\site-packages\exllamav2\model.py", line 553, in forward
    assert past_len + q_len <= cache.max_seq_len, "Total sequence length exceeds cache size in model.forward"
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: Total sequence length exceeds cache size in model.forward
Output generated in 0.51 seconds (0.00 tokens/s, 0 tokens, context 2062, seed 2010613599)
```
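The generation failure looks like a separate issue: the log reports a context of 2062 tokens, which exceeds the KV cache the ExLlamav2_HF loader allocated (governed by its max_seq_len setting; 2048 below is an assumed value, not confirmed from the log). The assertion in exllamav2's model.forward boils down to a check like this sketch:

```python
def fits_in_cache(past_len: int, q_len: int, max_seq_len: int) -> bool:
    """Mirror of the condition exllamav2 asserts in model.forward."""
    return past_len + q_len <= max_seq_len

# With the log's context of 2062 tokens and a hypothetical 2048-token cache,
# the check fails before a single token is generated:
print(fits_in_cache(0, 2062, 2048))  # False -> AssertionError

# A larger cache (or a shorter prompt) makes it pass:
print(fits_in_cache(0, 2062, 4096))  # True
```

In practice that usually means raising max_seq_len on the model loader tab, or lowering SillyTavern's Context Size so the prompt it builds fits inside the cache.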

Both Oobabooga and SillyTavern are on their current versions. All of my searching for similar problems and their resolutions has been fruitless.

Edit: The same thing happens with the exl2-6.0 and exl2-4.0 versions.
