---
license: apache-2.0  
inference: false  
---

# bling-phi-3

<!-- Provide a quick summary of what the model is/does. -->

bling-phi-3 is part of the BLING ("Best Little Instruct No-GPU") model series, RAG-instruct fine-tuned on top of a Microsoft Phi-3 base model.


### Benchmark Tests  

Evaluated against the benchmark test:   [RAG-Instruct-Benchmark-Tester](https://www.huggingface.co/datasets/llmware/rag_instruct_benchmark_tester)  
1 Test Run (temperature=0.0, do_sample=False), scored with 1 point for a correct answer, 0.5 points for a partially correct or blank / "Not Found" answer, 0.0 points for an incorrect answer, and -1 point for a hallucination.  

--**Accuracy Score**:  **99.5** correct out of 100  
--Not Found Classification:  95.0%  
--Boolean:  97.5%  
--Math/Logic:  80.0%  
--Complex Questions (1-5):  4 (Above Average - multiple-choice, causal)  
--Summarization Quality (1-5):  4 (Above Average)  
--Hallucinations:  No hallucinations observed in test runs.  
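
For concreteness, the rubric above can be tallied per question as a simple lookup; a minimal illustrative sketch in Python (the outcome labels are hypothetical, not the benchmark's actual field names):

    # illustrative tally of the scoring rubric above; outcome labels are hypothetical
    RUBRIC = {"correct": 1.0, "partial": 0.5, "not_found": 0.5,
              "incorrect": 0.0, "hallucination": -1.0}

    def tally(outcomes):
        # outcomes: one label per benchmark question, e.g. ["correct", "partial", ...]
        return sum(RUBRIC[o] for o in outcomes)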

For test run results (and a good indicator of target use cases), please see the files "core_rag_test" and "answer_sheet" in this repo.  

Note: compare results with [bling-phi-2](https://www.huggingface.co/llmware/bling-phi-2-v0), and [dragon-mistral-7b](https://www.huggingface.co/llmware/dragon-mistral-7b-v0).    

Note: see also the quantized gguf version of the model- [bling-phi-3-gguf](https://www.huggingface.co/llmware/bling-phi-3-gguf).   

Note: the PyTorch version answered 1 question with "Not Found" while the quantized version answered it correctly, hence the small difference in scores.  

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** llmware
- **Model type:** bling
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** Microsoft Phi-3

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

The intended use of BLING models is two-fold:

1.  Provide high-quality RAG-Instruct models designed for fact-based, no-"hallucination" question-answering in connection with an enterprise RAG workflow.

2.  BLING models are fine-tuned on top of leading base foundation models, generally in the 1-3B+ parameter range, and purposefully rolled out across multiple base models to provide choices and "drop-in" replacements for RAG-specific use cases.


### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

BLING is designed for enterprise automation use cases, especially in knowledge-intensive industries with complex information sources, such as financial services and the legal and regulatory sectors.  

BLING models have been trained for common RAG scenarios, specifically question-answering, key-value extraction, and basic summarization as the core instruction types, without the need for complex instruction verbiage: provide a text passage as context, ask a question, and get a clear, fact-based response.


## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

Any model can provide inaccurate or incomplete information, and should be used in conjunction with appropriate safeguards and fact-checking mechanisms.


## How to Get Started with the Model

The fastest way to get started with BLING is through direct import in transformers:

    from transformers import AutoTokenizer, AutoModelForCausalLM  
    tokenizer = AutoTokenizer.from_pretrained("llmware/bling-phi-3", trust_remote_code=True)  
    model = AutoModelForCausalLM.from_pretrained("llmware/bling-phi-3", trust_remote_code=True)  
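
If a GPU is available, you will generally want to move the model onto it before generating; a minimal sketch (the `device` variable defined here is reused in the generation script further below):

    import torch

    # pick a device and move the model onto it; set eval mode for inference
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device)
    model.eval()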

Please refer to the generation_test .py files in the Files repository, which include 200 samples and scripts to test the model.  The **generation_test_llmware_script.py** includes built-in llmware capabilities for fact-checking, as well as easy integration with document parsing and actual retrieval, to swap out the test set for a RAG workflow consisting of business documents.  

The BLING model was fine-tuned with a simple "\<human>" and "\<bot>" wrapper, so to get the best results, wrap inference entries as:

    full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"  

(As an aside, we intended to retire "human-bot" and tried several variations of the new Microsoft Phi-3 prompt template, but ultimately had slightly better results with the very simple "human-bot" separators, so we opted to keep them.)  


The BLING model was fine-tuned with closed-context samples, which generally assume that the prompt consists of two sub-parts:

1.  Text Passage Context, and
2.  Specific question or instruction based on the text passage

To get the best results, package "my_prompt" as follows:

    my_prompt = {{text_passage}} + "\n" + {{question/instruction}}
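
Putting the wrapper and the packaging together, a hypothetical end-to-end example (the passage and question below are made up for illustration):

    # hypothetical passage and question, for illustration only
    text_passage = "The total invoice amount is $4,500, due on June 30."
    question = "What is the total invoice amount?"

    my_prompt = text_passage + "\n" + question
    full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"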


If you are using a Hugging Face generation script:

    # prepare prompt packaging used in the fine-tuning process
    # (entries is one test record with "context" and "query" keys)
    new_prompt = "<human>: " + entries["context"] + "\n" + entries["query"] + "\n" + "<bot>:"

    inputs = tokenizer(new_prompt, return_tensors="pt")
    start_of_output = len(inputs.input_ids[0])

    #   temperature: set at 0.0 with do_sample=False for consistency of output
    #   max_new_tokens:  set at 100 - may prematurely stop a few of the summaries

    outputs = model.generate(
            inputs.input_ids.to(device),
            attention_mask=inputs.attention_mask.to(device),
            eos_token_id=tokenizer.eos_token_id,
            pad_token_id=tokenizer.eos_token_id,
            do_sample=False,
            temperature=0.0,
            max_new_tokens=100,
            )

    output_only = tokenizer.decode(outputs[0][start_of_output:],skip_special_tokens=True)  
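
To run the full test set, the same packaging can be applied in a loop; a hypothetical sketch, assuming the samples are stored as a JSON list of records with "context" and "query" keys (the file name below is made up):

    import json

    # hypothetical loop over test records of the form {"context": ..., "query": ...}
    with open("test_samples.json", "r") as f:
        test_set = json.load(f)

    for entries in test_set:
        new_prompt = "<human>: " + entries["context"] + "\n" + entries["query"] + "\n" + "<bot>:"
        # ...tokenize, generate, and decode as shown above...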


## Model Card Contact

Darren Oberst & llmware team