🚩 Report: Not working

#3
by sav7669 - opened

Hello,
The demo was working correctly before, but now I am getting an unknown JSON error. Could you please look into this?
Thanks

Hi @sav7669
I have fixed the error and opened a pull request (PR).
You can wait for the PR to be accepted, or test the fix immediately at https://huggingface.co/spaces/thinh-researcher/cord-v2

NAVER CLOVA INFORMATION EXTRACTION org

Thank you for reporting the issue :)
This issue was resolved by @thinh-researcher's PR.

That's great!
I can see the interface working in the app hosted on Hugging Face.

Could you please let me know what I need to change when running it in Google Colab? I see the same error there.
I tried installing transformers==4.24.0, but still no luck.
I'd appreciate any solutions :)
Link to the Colab notebook:
https://colab.research.google.com/drive/1o07hty-3OQTvGnc_7lgQFLvvKQuLjqiw?usp=sharing

@sav7669
You should change !pip install donut-python to:

!pip install git+https://github.com/clovaai/donut
!pip install gradio

Cloned from your Colab notebook: https://colab.research.google.com/drive/1LH-VZaUdwLtmXyHXeb4iYJ3zW9bR3thy?usp=sharing

I am getting the following error; may I know what is wrong? I am using Python 3.10.
python app.py
/home/ubuntu/miniconda/envs/donut/lib/python3.10/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3483.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
Traceback (most recent call last):
  File "/home/ubuntu/donut/donut-base-finetuned-cord-v2/app.py", line 27, in <module>
    pretrained_model = DonutModel.from_pretrained("naver-clova-ix/donut-base-finetuned-cord-v2")
  File "/home/ubuntu/miniconda/envs/donut/lib/python3.10/site-packages/donut/model.py", line 593, in from_pretrained
    model = super(DonutModel, cls).from_pretrained(pretrained_model_name_or_path, revision="official", *model_args, **kwargs)
  File "/home/ubuntu/miniconda/envs/donut/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2379, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/home/ubuntu/miniconda/envs/donut/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2525, in _load_pretrained_model
    model._init_weights(module)
  File "/home/ubuntu/miniconda/envs/donut/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1105, in _init_weights
    raise NotImplementedError(f"Make sure `_init_weights` is implemented for {self.__class__}")
NotImplementedError: Make sure `_init_weights` is implemented for <class 'donut.model.DonutModel'>

@dgsaibal Since you are using the Donut model only for inference, simply add a new method _init_weights to the DonutModel class:

class DonutModel(PreTrainedModel):
    ...
    def _init_weights(self, module):
        pass
    ...

Hope this helps.
For context: the transformers package is still evolving quickly, and some of its changes break downstream code.
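
If you would rather not edit the installed package, the same fix can be applied as a small runtime monkey-patch before loading the model (a minimal sketch based on the fix above; the no-op is only appropriate for inference):

from donut import DonutModel

# Same idea as the class edit above, applied at runtime so you do not
# have to modify site-packages. A no-op _init_weights is safe here
# because every weight is loaded from the checkpoint anyway.
DonutModel._init_weights = lambda self, module: None

pretrained_model = DonutModel.from_pretrained(
    "naver-clova-ix/donut-base-finetuned-cord-v2"
)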

RuntimeError: Error(s) in loading state_dict for DonutModel:
    size mismatch for encoder.model.layers.1.downsample.norm.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]).
    size mismatch for encoder.model.layers.1.downsample.norm.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]).
    size mismatch for encoder.model.layers.1.downsample.reduction.weight: copying a param with shape torch.Size([512, 1024]) from checkpoint, the shape in current model is torch.Size([256, 512]).
    size mismatch for encoder.model.layers.2.downsample.norm.weight: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1024]).
    size mismatch for encoder.model.layers.2.downsample.norm.bias: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1024]).
    size mismatch for encoder.model.layers.2.downsample.reduction.weight: copying a param with shape torch.Size([1024, 2048]) from checkpoint, the shape in current model is torch.Size([512, 1024]).
    You may consider adding `ignore_mismatched_sizes=True` in the model `from_pretrained` method.

So I am now using:

DonutModel.from_pretrained("naver-clova-ix/donut-base-finetuned-cord-v2", ignore_mismatched_sizes=True)

But after that it gives the error below; can anyone please help?

AttributeError: 'SwinTransformer' object has no attribute 'pos_drop'

@Joyantac33
SwinTransformer is imported from timm.
To fix it, you can either pip install timm==0.6.13 or comment out the line x = self.model.pos_drop(x) in SwinEncoder.forward(...) (in the file donut/model.py).

For more information:
In SwinTransformer (timm==0.6.13), pos_drop is defined as a dropout layer: self.pos_drop = nn.Dropout(p=drop_rate).
Since Donut instantiates SwinTransformer without passing drop_rate, it defaults to drop_rate=0.0:

# in file donut/model.py
class SwinEncoder(nn.Module):
    def __init__(...):
        ...
        self.model = SwinTransformer(
            img_size=self.input_size,
            depths=self.encoder_layer,
            window_size=self.window_size,
            patch_size=4,
            embed_dim=128,
            num_heads=[4, 8, 16, 32],
            num_classes=0,
        )
        ...

which means the dropout is a no-op, so you can safely remove the line x = self.model.pos_drop(x), as in the sketch below.
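
For reference, this is roughly what the patched method looks like (a sketch of the comment-out workaround; the surrounding lines follow donut/model.py as of this thread, so double-check them against your installed copy):

# in file donut/model.py
class SwinEncoder(nn.Module):
    ...
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.model.patch_embed(x)
        # x = self.model.pos_drop(x)  # no-op when drop_rate=0; attribute removed in newer timm
        x = self.model.layers(x)
        return x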

I am using the "naver-clova-ix/donut-base-finetuned-docvqa" model and want to print the full JSON content of the result after it reads the image, without invoking any prompts or user input. I just want it to parse the image and give me the full JSON content. How can I achieve that? Please help. I am using the code below:

import re
import gradio as gr

import torch
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

def process_document(image, question):
    # prepare encoder inputs
    pixel_values = processor(image, return_tensors="pt").pixel_values
    print(pixel_values)
    # prepare decoder inputs
    task_prompt = "<s_docvqa><s_question>{user_input}</s_question><s_answer>"
    prompt = task_prompt.replace("{user_input}", question)
    decoder_input_ids = processor.tokenizer(prompt, add_special_tokens=False, return_tensors="pt").input_ids
    print(decoder_input_ids)
    # generate answer
    outputs = model.generate(
        pixel_values.to(device),
        decoder_input_ids=decoder_input_ids.to(device),
        max_length=model.decoder.config.max_position_embeddings,
        early_stopping=True,
        pad_token_id=processor.tokenizer.pad_token_id,
        eos_token_id=processor.tokenizer.eos_token_id,
        use_cache=True,
        num_beams=1,
        bad_words_ids=[[processor.tokenizer.unk_token_id]],
        return_dict_in_generate=True,
    )

    # postprocess
    sequence = processor.batch_decode(outputs.sequences)[0]
    sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
    sequence = re.sub(r"<.*?>", "", sequence, count=1).strip()  # remove first task start token

    json_content = processor.token2json(sequence)
    print(json_content)  # print the full JSON content

    return json_content

description = "Gradio Demo for Donut, an instance of VisionEncoderDecoderModel fine-tuned on DocVQA (document visual question answering). To use it, simply upload your image and type a question and click 'submit', or click one of the examples to load them. Read more at the links below."
article = "Donut: OCR-free Document Understanding Transformer | Github Repo"

demo = gr.Interface(
    fn=process_document,
    inputs=["image", "text"],
    outputs="json",
    title="Demo: Donut 🍩 for DocVQA",
    description=description,
    article=article,
    enable_queue=True,
    examples=[["example_1.png", "When is the coffee break?"], ["example_2.jpeg", "What's the population of Stoddard?"]],
    cache_examples=False)

demo.launch()
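
For what it's worth, the DocVQA checkpoint is question-driven, so it will always expect a prompt. If the goal is a promptless full-document parse into JSON, a parsing checkpoint such as naver-clova-ix/donut-base-finetuned-cord-v2 is a better fit. A minimal sketch (assuming the CORD-v2 task start token <s_cord-v2> and a hypothetical local image receipt.png):

import re
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

# CORD-v2 parses the whole document into JSON; no question is needed.
processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-cord-v2")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-cord-v2")

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

image = Image.open("receipt.png").convert("RGB")  # hypothetical input file
pixel_values = processor(image, return_tensors="pt").pixel_values

# The task start token is the entire prompt for a full parse.
decoder_input_ids = processor.tokenizer(
    "<s_cord-v2>", add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(
    pixel_values.to(device),
    decoder_input_ids=decoder_input_ids.to(device),
    max_length=model.decoder.config.max_position_embeddings,
    pad_token_id=processor.tokenizer.pad_token_id,
    eos_token_id=processor.tokenizer.eos_token_id,
    use_cache=True,
    bad_words_ids=[[processor.tokenizer.unk_token_id]],
    return_dict_in_generate=True,
)

sequence = processor.batch_decode(outputs.sequences)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
sequence = re.sub(r"<.*?>", "", sequence, count=1).strip()  # drop the task start token
print(processor.token2json(sequence))  # full JSON content of the parsed document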

I am trying to train a custom model but am getting the error below; can you help?

!python /content/donut/train.py --config /content/sample_data/train_cord.yaml

dataset_name_or_paths:
  - ../content/pan_set
train_batch_sizes:
  - 1
check_val_every_n_epochs: 10
max_steps: -1
result_path: /content/pan_set
exp_name: train_cord
exp_version: 20230630_083635
Config is saved at /content/pan_set/train_cord/20230630_083635/config.yaml
Traceback (most recent call last):
  File "/content/donut/train.py", line 149, in <module>
    train(config)
  File "/content/donut/train.py", line 55, in train
    pl.utilities.seed.seed_everything(config.get("seed", 42), workers=True)
AttributeError: module 'pytorch_lightning.utilities.seed' has no attribute 'seed_everything'
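
The call at donut/train.py line 55 relies on an old pytorch-lightning module layout; newer releases no longer expose seed_everything under pytorch_lightning.utilities.seed. A sketch of two options, assuming you are on a recent pytorch-lightning: either pin an older 1.x release that still has that path, or switch the one line to the top-level API, which exists in both old and new releases:

# in /content/donut/train.py, line 55
# old (removed in newer pytorch-lightning):
#     pl.utilities.seed.seed_everything(config.get("seed", 42), workers=True)
# replacement using the top-level API:
pl.seed_everything(config.get("seed", 42), workers=True)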

NAVER CLOVA INFORMATION EXTRACTION org

Thank you all for the issue reports and discussions. I have resolved some minor issues with the versions of the dependency libraries (e.g., timm). The demo is functioning properly now.
