
This is the Hugging Face model release for our paper "Paraphrasing evades detectors of AI-generated text, but retrieval is an effective defense".

## Paper and GitHub Repository

- Paper: https://arxiv.org/abs/2303.13408
- Code: https://github.com/martiansideofthemoon/ai-detection-paraphrases
- Usage instructions: https://github.com/martiansideofthemoon/ai-detection-paraphrases#running-the-paraphraser-model-dipper

## What is DIPPER?

DIPPER ("Discourse Paraphraser") is a 11B parameter paraphrase generation model built by fine-tuning T5-XXL. DIPPER possesses two unique features that help its outputs evade AI-generated text detectors:

- **Paraphrasing long-form text in context:** Most modern paraphrasers are exclusively trained on sentence-level data, ignoring discourse-level information. However, many critical use cases of LLMs involve generating long-form text in response to detailed, user-specified prompts. Thus, we train DIPPER to paraphrase paragraph-length texts, re-order content, and optionally leverage context such as input prompts.

- **Controlling output diversity:** Another weakness of existing paraphrasers is that they lack an easy way to control output diversity. An attacker may want to apply just the minimum amount of lexical and syntactic modification necessary to evade a detection algorithm. DIPPER provides users with two intuitive scalar control knobs at inference time that are trained end-to-end: one controls the lexical diversity of the paraphrase, and the other controls the amount of content re-ordering; a sketch of the resulting input format follows this list.
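To make the knobs concrete, here is a minimal sketch of the control-code input format, mirroring the `paraphrase` method in the usage section below (`build_dipper_input` is a hypothetical helper name, not part of the released code). Note that the code embedded in the input is inverted, i.e. `100 - diversity`:

```python
def build_dipper_input(text, lex_diversity, order_diversity, prefix=""):
    """Prepend DIPPER's control codes to a sentence window."""
    # The model expects inverted codes: a diversity of 60 becomes a code of 40.
    lex_code = 100 - lex_diversity      # multiples of 20, from 0 to 100
    order_code = 100 - order_diversity  # multiples of 20, from 0 to 100
    core = f"lexical = {lex_code}, order = {order_code}"
    if prefix:  # optional context, e.g. the original prompt
        core += f" {prefix}"
    return f"{core} <sent> {text} </sent>"

# -> "lexical = 40, order = 100 <sent> The unicorns fed on elk and goats. </sent>"
print(build_dipper_input("The unicorns fed on elk and goats.", lex_diversity=60, order_diversity=0))
```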

We leverage the PAR3 dataset publicly released by Thai et al. (2022) to train DIPPER. This dataset contains multiple translations of non-English novels into English, aligned at the paragraph level (e.g., it contains both the Henry Morley and Robert Adams translations of Voltaire's Candide), which we treat as paragraph-level paraphrases and use to train our paraphraser.

## Using DIPPER

Full instructions: https://github.com/martiansideofthemoon/ai-detection-paraphrases#running-the-paraphraser-model-dipper

We recommend the code below to run the model correctly (it depends on `torch`, `transformers`, and `nltk`):

```python
import time

import torch
from nltk.tokenize import sent_tokenize  # requires nltk.download("punkt")
from transformers import T5Tokenizer, T5ForConditionalGeneration


class DipperParaphraser(object):
    def __init__(self, model="kalpeshk2011/dipper-paraphraser-xxl", verbose=True):
        time1 = time.time()
        self.tokenizer = T5Tokenizer.from_pretrained('google/t5-v1_1-xxl')
        self.model = T5ForConditionalGeneration.from_pretrained(model)
        if verbose:
            print(f"{model} model loaded in {time.time() - time1}")
        self.model.cuda()
        self.model.eval()

    def paraphrase(self, input_text, lex_diversity, order_diversity, prefix="", sent_interval=3, **kwargs):
        """Paraphrase a text using the DIPPER model.

        Args:
            input_text (str): The text to paraphrase. It is split into sentences, and each window of sent_interval sentences is wrapped in <sent> and </sent> markers (with a space on either side) before being passed to the model.
            lex_diversity (int): The lexical diversity of the output; choose a multiple of 20 from 0 to 100. 0 means no diversity, 100 means maximum diversity.
            order_diversity (int): The order diversity (content re-ordering) of the output; choose a multiple of 20 from 0 to 100. 0 means no diversity, 100 means maximum diversity.
            prefix (str): Optional context, such as the original prompt, that the paraphrase should stay consistent with.
            sent_interval (int): Number of sentences paraphrased per model call.
            **kwargs: Additional generation keyword arguments like do_sample, top_p, top_k, max_length.
        """
        assert lex_diversity in [0, 20, 40, 60, 80, 100], "Lexical diversity must be one of 0, 20, 40, 60, 80, 100."
        assert order_diversity in [0, 20, 40, 60, 80, 100], "Order diversity must be one of 0, 20, 40, 60, 80, 100."

        # DIPPER's control codes are inverted: code = 100 - diversity,
        # so a code of 0 corresponds to maximum diversity.
        lex_code = int(100 - lex_diversity)
        order_code = int(100 - order_diversity)

        input_text = " ".join(input_text.split())
        sentences = sent_tokenize(input_text)
        prefix = " ".join(prefix.replace("\n", " ").split())
        output_text = ""

        for sent_idx in range(0, len(sentences), sent_interval):
            curr_sent_window = " ".join(sentences[sent_idx:sent_idx + sent_interval])
            final_input_text = f"lexical = {lex_code}, order = {order_code}"
            if prefix:
                final_input_text += f" {prefix}"
            final_input_text += f" <sent> {curr_sent_window} </sent>"

            final_input = self.tokenizer([final_input_text], return_tensors="pt")
            final_input = {k: v.cuda() for k, v in final_input.items()}

            with torch.inference_mode():
                outputs = self.model.generate(**final_input, **kwargs)
            outputs = self.tokenizer.batch_decode(outputs, skip_special_tokens=True)
            prefix += " " + outputs[0]
            output_text += " " + outputs[0]

        return output_text.strip()  # drop the leading space added in the loop

if __name__ == "__main__":
    dp = DipperParaphraser()

    prompt = "In a shocking finding, scientist discovered a herd of unicorns living in a remote valley."
    input_text = "They have never been known to mingle with humans. Today, it is believed these unicorns live in an unspoilt environment which is surrounded by mountains. Its edge is protected by a thick wattle of wattle trees, giving it a majestic appearance. Along with their so-called miracle of multicolored coat, their golden coloured feather makes them look like mirages. Some of them are rumored to be capable of speaking a large amount of different languages. They feed on elk and goats as they were selected from those animals that possess a fierceness to them, and can \"eat\" them with their long horns."

    print(f"Input = {prompt} <sent> {input_text} </sent>\n")
    output_l60_sample = dp.paraphrase(input_text, lex_diversity=60, order_diversity=0, prefix=prompt, do_sample=True, top_p=0.75, top_k=None, max_length=512)
    print(f"Output (Lexical diversity = 60, Sample p = 0.75) = {output_l60_sample}\n")