Tommy0303000 committed
Commit • 5a9dd9d
1 Parent(s): ee5192d
Update README.md

README.md CHANGED
---
library_name: transformers
tags: [LLM, fine-tuning, SQL, Llama 2, PEFT]
---

# Model Card for Model ID

This model card presents an enhanced Llama 2 model, fine-tuned for SQL programming and deployed on the Yale High Performance Computing (HPC) platform. The project leverages the computational power of Yale HPC to push the boundaries of what Large Language Models (LLMs) can achieve, specifically in the context of SQL programming.
## Model Details

### Model Description

This model aims to advance the capabilities of LLMs in programming languages, with a focus on SQL. The Llama 2 model, developed by Meta AI, serves as the foundation for this project. It has been fine-tuned using a Parameter-Efficient Fine-Tuning (PEFT) approach, integrating techniques such as Low-Rank Adaptation (LoRA) and Retrieval-Augmented Generation (RAG) to enhance its SQL programming assistance capabilities.

- **Developed by:** Kaifeng Gao, Jiayi Chen, Yuntian Liu, Yixiao Chen
- **Model type:** Large Language Model (Llama 2)
- **Language(s) (NLP):** English
- **License:** TBD
- **Finetuned from model:** Llama 2

### Model Sources

- **Repository:** https://github.com/Kaifeng-Gao/Llama-7b-HPC
## Uses

### Direct Use

The model is designed to assist developers in writing efficient and accurate SQL queries by providing contextually relevant suggestions and explanations. It can be directly used by SQL programmers of all skill levels to improve their query writing process.
### Downstream Use

The model can serve as a backend for educational tools, IDE plugins, or other applications that require SQL query generation or optimization.

### Out-of-Scope Use

Uses that involve tasks far removed from SQL programming, or that require real-time interaction with live databases, may not be suitable.
## Bias, Risks, and Limitations

The model's performance and output quality are directly tied to the training dataset. As such, any biases or inaccuracies in the dataset could be reflected in the model's suggestions.

### Recommendations

Users should verify the model's suggestions against best practices and the latest SQL standards. Ongoing evaluation and refinement with updated datasets are recommended to mitigate biases and improve performance.
## How to Get Started with the Model

Refer to the project's GitHub repository for detailed instructions on deploying and interacting with the model through the Streamlit web application.
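As a quick orientation, the sketch below shows one plausible way to load a PEFT-style checkpoint with `transformers` and `peft` and generate a query. The base-model and adapter ids are placeholders, not ids confirmed by this card; substitute the actual ones from the repository.

```python
# A minimal loading sketch, assuming a LoRA adapter published on the Hub.
# Both ids below are placeholders, not confirmed by this model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"          # assumed base checkpoint
adapter_id = "your-username/llama2-sql-peft"  # hypothetical adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)

prompt = (
    "### Instruction:\nList the names of employees hired after 2020.\n"
    "### Context:\nCREATE TABLE employees (name VARCHAR, hire_year INT)\n"
    "### Answer:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```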
## Training Details

### Training Data

The model was fine-tuned on the "b-mc2/sql-create-context" dataset, which pairs SQL questions with their corresponding answers and covers a broad range of SQL concepts.
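For reference, the dataset is public on the Hugging Face Hub and can be pulled with `datasets`; the field names below reflect the published b-mc2/sql-create-context schema.

```python
# Load the public fine-tuning dataset from the Hugging Face Hub.
from datasets import load_dataset

dataset = load_dataset("b-mc2/sql-create-context", split="train")

# Each record pairs a natural-language "question" with a "context"
# (the CREATE TABLE statement) and the target SQL "answer".
print(dataset[0])
```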
### Training Procedure

#### Preprocessing

The dataset was structured for efficient learning by applying templates that transform each raw record into an instruction-and-answer pair.
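A sketch of this kind of templating follows. The exact prompt wording the authors used is not specified in this card, so the `format_example` helper and its field layout are illustrative.

```python
# Illustrative instruction template; the authors' actual wording may differ.
def format_example(example):
    return {
        "text": (
            "### Instruction:\n" + example["question"] + "\n"
            "### Context:\n" + example["context"] + "\n"
            "### Answer:\n" + example["answer"]
        )
    }

# Applied over the whole dataset with datasets.Dataset.map.
dataset = dataset.map(format_example)
```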
#### Training Hyperparameters

- **Training regime:** The model leveraged LoRA for efficient adaptation, alongside model quantization for a reduced memory footprint; a configuration sketch follows this list.
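The sketch below shows a typical LoRA-plus-quantization setup with `peft` and `bitsandbytes`. The rank, target modules, and 4-bit settings are assumptions, not the project's recorded values.

```python
# Hypothetical LoRA + 4-bit quantization configuration; the numeric
# values here are assumptions, not the authors' reported settings.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",   # assumed base checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA weights are trainable
```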
## Evaluation

After fine-tuning and RAG integration, the model showed significant improvement in generating correct SQL syntax and in providing comprehensive information drawn from SQL tutorial websites. Some responses may still contain minor syntax errors, attributable to the quality of the initial training dataset.
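To illustrate the retrieval step behind the RAG integration, here is a minimal sketch assuming `sentence-transformers` for embeddings; the project's actual retrieval stack and tutorial sources are not documented in this card.

```python
# Minimal retrieval sketch: embed reference passages, pick the one
# closest to the user's question, and prepend it to the prompt.
# The embedding model and passages are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")
passages = [
    "GROUP BY groups rows sharing a value so aggregates apply per group.",
    "LEFT JOIN keeps every row of the left table, filling NULLs on the right.",
]
passage_emb = encoder.encode(passages, convert_to_tensor=True)

question = "How do I count orders per customer?"
question_emb = encoder.encode(question, convert_to_tensor=True)
best = util.cos_sim(question_emb, passage_emb).argmax().item()

augmented_prompt = (
    f"### Reference:\n{passages[best]}\n"
    f"### Instruction:\n{question}\n### Answer:\n"
)
```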
## Environmental Impact

The use of Yale HPC resources was aimed at efficient computation, though specific metrics on carbon emissions and electricity usage are pending further analysis.
## Citation