Model Description
cmarkea/bloomz-3b-dpo-table-qa-latex is a fine-tuned version of the cmarkea/bloomz-3b-dpo-chat model, specialized for table-based question answering (QA) tasks. The model has been trained on the table-vqa dataset, which was developed by Crédit Mutuel Arkéa, and it processes tables provided in their LaTeX source format.
This model is optimized for multilingual environments, supporting both French and English, and is especially effective in extracting and interpreting tabular data from documents. It has been fine-tuned for 2 days on an A100 40GB GPU and operates in bfloat16 precision to maximize resource efficiency.
Key Features
- Domain: Table-based question answering (QA), particularly for extracting information from LaTeX-format tables.
- Language Support: French and English, making it suitable for multilingual environments.
- Model Type: Text-to-text language model.
- Precision: bfloat16, optimizing computational efficiency.
- Training Duration: 2 days on A100 40GB GPU.
- Fine-Tuning Method: Full fine-tuning.
This model is highly applicable in fields where tabular data needs to be queried and analyzed, such as financial reports, academic papers, and technical documentation.
Usage
Here’s an example of how to use this model for table-based question answering:
```python
import torch
from transformers import pipeline

device = 0 if torch.cuda.is_available() else -1

# Use a raw string so LaTeX backslashes (e.g. \b in \begin, \t in \times)
# are not interpreted as Python escape sequences.
table = r'''\begin{tabular}{|c|c|c|}
\hline
Model & MAE-{$TKE$}-low & MAE-{$TKE$}-high\\
\hline
U-FNET & 0.0048 & $1.09 \times 10^{-5}$\\
\hline
\end{tabular}'''

question = "What is the MAE-TKE-high value for the U-FNET model?"
prompt = table + '\n' + question

# Load the fine-tuned table-QA model in bfloat16, the precision it was
# trained in.
model = pipeline(
    "text-generation",
    "cmarkea/bloomz-3b-dpo-table-qa-latex",
    device=device,
    torch_dtype=torch.bfloat16,
)
result = model(f"</s>{prompt}<s>", max_new_tokens=512)
print(result)
```
The model processes tables written in LaTeX format, so be sure to provide your tables in that form.
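If your data is not already in LaTeX, you can assemble a minimal tabular string matching the format shown above. The `rows_to_latex_tabular` helper below is an illustrative sketch, not part of the model's API:

```python
def rows_to_latex_tabular(rows):
    """Render a list of rows (lists of cell strings) as a minimal
    LaTeX tabular with vertical rules and \\hline separators,
    matching the layout used in the usage example above."""
    n_cols = len(rows[0])
    spec = "|" + "c|" * n_cols          # e.g. |c|c|c| for 3 columns
    lines = [r"\begin{tabular}{" + spec + "}", r"\hline"]
    for row in rows:
        lines.append(" & ".join(row) + r"\\")
        lines.append(r"\hline")
    lines.append(r"\end{tabular}")
    return "\n".join(lines)

table = rows_to_latex_tabular([
    ["Model", "MAE-low", "MAE-high"],
    ["U-FNET", "0.0048", r"$1.09 \times 10^{-5}$"],
])
print(table)
```

The resulting string can be concatenated with a question and passed to the pipeline exactly as in the usage example.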
Performance
This model was evaluated on 200 question-answer pairs extracted from 100 tables in the table-vqa test set. Each table had two question-answer pairs: one in French and one in English.
The evaluation used the LLM-as-Juries method with three judge models (GPT-4o, Gemini 1.5 Pro, and Claude 3.5 Sonnet). The scoring criteria were adapted to the table QA context and applied on a 0-to-5 scale for a finer-grained assessment of the model's performance.
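As a sketch of how such jury scores can be aggregated, the snippet below averages per-pair scores across the three judges and then overall. The score values and judge keys are illustrative assumptions, not the actual evaluation data or code:

```python
from statistics import mean

# Hypothetical 0-5 scores for three question-answer pairs from each judge;
# real scores would come from the judge models' responses.
jury_scores = {
    "gpt-4o":            [5, 4, 5],
    "gemini-1.5-pro":    [4, 4, 5],
    "claude-3.5-sonnet": [5, 5, 4],
}

# Average across judges for each pair, then average over all pairs.
per_pair = [mean(scores) for scores in zip(*jury_scores.values())]
overall = mean(per_pair)
print(per_pair, overall)
```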
Here’s a visualization of the results:
Citation
```bibtex
@online{AgDePaligemmaTableQALatex,
  AUTHOR = {Tom Agonnoude and Cyrile Delestre},
  URL = {https://huggingface.co/cmarkea/bloomz-3b-dpo-table-qa-latex},
  YEAR = {2024},
  KEYWORDS = {Table understanding, LaTeX, Multilingual, QA},
}
```