Added FAQ
- app.py +3 -0
- contents.py +15 -0
app.py
CHANGED
```diff
@@ -17,6 +17,7 @@ from contents import (
     subtitle,
     title,
     powered_by,
+    faq,
 )
 from gradio_highlightedtextbox import HighlightedTextbox
 from gradio_modal import Modal
@@ -493,6 +494,8 @@ with gr.Blocks(css=custom_css) as demo:
     with gr.Tab("🔧 Usage Guide"):
         gr.Markdown(how_to_use)
         gr.Markdown(example_explanation)
+    with gr.Tab("❓ FAQ"):
+        gr.Markdown(faq)
     with gr.Tab("📄 Citing PECoRe"):
         gr.Markdown("To refer to the PECoRe framework for context usage detection, cite:")
         gr.Code(pecore_citation, interactive=False, label="PECoRe (Sarti et al., 2024)")
```
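
For orientation, the tab layout this change extends boils down to the minimal sketch below. The `how_to_use` and `faq` markdown strings live in `contents.py` in the real app and are stubbed out here, and the actual `gr.Blocks` context contains many more components:

```python
import gradio as gr

# Stand-ins for the markdown strings imported from contents.py in the real app.
how_to_use = "## Usage Guide\n..."
faq = "## FAQ\n..."

with gr.Blocks() as demo:
    with gr.Tab("🔧 Usage Guide"):
        gr.Markdown(how_to_use)
    # New tab added by this commit: it simply renders the faq string.
    with gr.Tab("❓ FAQ"):
        gr.Markdown(faq)

demo.launch()
```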
contents.py
CHANGED
```diff
@@ -45,6 +45,7 @@ example_explanation = """
 <p>Consider the following example, showing inputs and outputs of the <a href='https://huggingface.co/gsarti/cora_mgen' target='_blank'>CORA Multilingual QA</a> model provided as default in the interface, using default settings.</p>
 <img src="file/img/pecore_ui_output_example.png" width=100% />
 <p>The PECoRe CTI step identified two context-sensitive tokens in the generation (<code>287</code> and <code>,</code>), while the CCI step associated each of those with the most influential tokens in the context. It can be observed that in both cases the matching tokens stating the number of inhabitants are identified as salient (<code>,</code> and <code>287</code> for the generated <code>287</code>, while <code>235</code> is also found salient for the generated <code>,</code>). In this case, the influential context found by PECoRe is lexically identical to the generated output, but in principle better LMs might not use their inputs verbatim, hence the interest in using model internals with PECoRe.</p>
+<p>"Why wasn't <code>235</code> identified as context-sensitive, when it intuitively is?" you might ask. In this case, it is due to the generation being quite short, which makes its CTI score less salient than those of other tokens. The permissiveness of result selection is an adjustable parameter (see the usage tips below).</p>
 <h2>Usage tips</h2>
 <ol>
 <li>The <code>📂 Download output</code> button allows you to download the full JSON output produced by the Inseq CLI. It includes, among other things, the full set of CTI and CCI scores produced by PECoRe, tokenized versions of the input context and generated output, and the full arguments used for the CLI call.</li>
@@ -62,6 +63,20 @@ show_code_modal = """
 <p>The snippets provided below are updated based on the current parameter configuration of the demo, and allow you to use Python and Shell code to call the Inseq CLI. <b>We recommend using the Python version for repeated evaluation, since it allows for model preloading.</b></p>
 """
 
+faq = """
+<h2>❓ FAQ</h2>
+<p><b>Q: Why should I use PECoRe rather than <a href="https://docs.llamaindex.ai/en/stable/examples/query_engine/citation_query_engine.html" target="_blank">lexical/semantic matching</a>, <a href="https://arxiv.org/abs/2204.04991" target="_blank">NLI</a> or <a href="https://js.langchain.com/docs/use_cases/question_answering/citations" target="_blank">citation prompting</a> for attributing model generations?</b></p>
+<p>A: The main difference concerns <b>faithfulness</b>: all these techniques rely on different forms of surface-level matching to produce plausible citations, but do not guarantee that the model is actually using such information during generation. PECoRe does guarantee a variable degree of faithfulness to the model's inner workings, depending on the CTI/CCI metrics used.</p>
+<p><b>Q: Can PECoRe be used for my task?</b></p>
+<p>A: PECoRe is designed to be task-agnostic, and can be used with any generative language model for tasks where a contextual component can clearly be identified in the input (e.g. retrieved paragraphs in RAG) or the output (e.g. reasoning steps in chain-of-thought prompting). The current Inseq implementation supports only text as a modality, but conceptually the PECoRe framework can easily be extended to attribute multimodal context components.</p>
+<p><b>Q: What are the main limitations of PECoRe?</b></p>
+<p>A: PECoRe is limited by the need for a present/absent context (either in the input or in the output) to enable contrastive comparison, and by the choice of parameters (especially result selection ones) that can require specific tuning for different models and tasks.</p>
+<br>
+<h3>⚙️ Technical matters</h3>
+<p><b>Q: Why is it important to separate the <code>{context}</code> and <code>{current}</code> tags from other tokens with whitespace in input/output templates?</b></p>
+<p>A: Taking the default CORA template <code><Q>: {current} <P>: {context}</code> as an example, the whitespace after <code>:</code> for both tags ensures that, when the tagged text is tokenized in isolation, the same token is used in both cases. If this whitespace were not present, you might end up with e.g. <code>Test</code> for the full tokenization (as no whitespace precedes it) and <code>▁Test</code> for the partial one (as initial tokens are always prefixed with <code>▁</code> in SentencePiece). This might succeed but produce unexpected results if both options are tokenized into the same number of tokens, or fail altogether if the number of tokens for the space-prefixed and spaceless versions differs. Note that this is not necessary if the template includes simply the tag itself (e.g. <code>{current}</code>).</p>
+"""
+
 pecore_citation = """@inproceedings{sarti-etal-2023-quantifying,
     title = "Quantifying the Plausibility of Context Reliance in Neural Machine Translation",
     author = "Sarti, Gabriele and
```
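
On the model-preloading recommendation above: loading the model once and reusing it across calls is what the Python snippets enable. The sketch below illustrates the idea; `inseq.load_model` is Inseq's standard loading API, while the `AttributeContextArgs` / `attribute_context_with_model` import path and argument names are assumptions based on the Inseq CLI layout and should be checked against the installed Inseq version:

```python
import inseq
# NOTE: assumed import path and names, modeled on the Inseq CLI module
# layout; verify against your Inseq version before use.
from inseq.commands.attribute_context import (
    AttributeContextArgs,
    attribute_context_with_model,
)

# Load the model once: this is the expensive step that a fresh Shell
# CLI invocation would repeat for every example.
model = inseq.load_model("gsarti/cora_mgen", "saliency")

# Toy question/context pairs, purely illustrative.
examples = [
    ("What is the population of Groningen?", "Groningen has 235,287 inhabitants."),
]
for question, passage in examples:
    args = AttributeContextArgs(
        model_name_or_path="gsarti/cora_mgen",
        input_current_text=question,
        input_context_text=passage,
    )
    # Reuse the preloaded model for each PECoRe run.
    out = attribute_context_with_model(args, model)
```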
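
Finally, the whitespace answer in the new FAQ can be checked directly with any SentencePiece tokenizer. A minimal sketch, using `google/mt5-small` purely as a convenient SentencePiece model (the exact pieces printed depend on the vocabulary):

```python
from transformers import AutoTokenizer

# Any SentencePiece-based tokenizer works for this check; mt5-small is
# just a small, convenient example, not the model used in the demo.
tok = AutoTokenizer.from_pretrained("google/mt5-small")

# Tokenized in isolation, a word receives the SentencePiece
# start-of-word marker "▁"...
print(tok.tokenize("Test"))      # typically ['▁Test']
# ...while the same word glued to a preceding token does not, so the
# partial and full tokenizations of a template can disagree.
print(tok.tokenize("<P>:Test"))  # "Test" is not preceded by whitespace here
print(tok.tokenize("<P>: Test")) # the space preserves '▁Test'
```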