---
language:
- en
pipeline_tag: text-generation
tags:
- esper
- esper-2
- valiant
- valiant-labs
- llama
- llama-3.1
- llama-3.1-instruct
- llama-3.1-instruct-8b
- llama-3
- llama-3-instruct
- llama-3-instruct-8b
- 8b
- code
- code-instruct
- python
- dev-ops
- terraform
- azure
- aws
- gcp
- architect
- engineer
- developer
- conversational
- chat
- instruct
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
datasets:
- sequelbox/Titanium
- sequelbox/Tachibana
- sequelbox/Supernova
model_type: llama
model-index:
- name: ValiantLabs/Llama3.1-8B-Esper2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-Shot)
type: Winogrande
args:
num_few_shot: 5
metrics:
- type: acc
value: 75.85
name: acc
license: llama3.1
---
[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)
# QuantFactory/Llama3.1-8B-Esper2-GGUF
This is a quantized version of [ValiantLabs/Llama3.1-8B-Esper2](https://huggingface.co/ValiantLabs/Llama3.1-8B-Esper2), created using llama.cpp.
# Original Model Card
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64f267a8a4f79a118e0fcc89/4I6oK8DG0so4VD8GroFsd.jpeg)
Esper 2 is a DevOps and cloud architecture code specialist built on Llama 3.1 8b.
- Expertise-driven: an AI assistant focused on AWS, Azure, GCP, Terraform, Dockerfiles, pipelines, shell scripts, and more!
- Real-world problem solving and high-quality code-instruct performance within the Llama 3.1 Instruct chat format.
- Finetuned on synthetic [DevOps-instruct](https://huggingface.co/datasets/sequelbox/Titanium) and [code-instruct](https://huggingface.co/datasets/sequelbox/Tachibana) data generated with Llama 3.1 405b.
- Overall chat performance supplemented with [generalist chat data.](https://huggingface.co/datasets/sequelbox/Supernova)
Try our code-instruct AI assistant [Enigma!](https://huggingface.co/ValiantLabs/Llama3.1-8B-Enigma)
## Version
This is the **2024-10-02** release of Esper 2 for Llama 3.1 8b.
Esper 2 is now available for [Llama 3.2 3b!](https://huggingface.co/ValiantLabs/Llama3.2-3B-Esper2)
Esper 2 will be coming to more model sizes soon :)
## Prompting Guide
Esper 2 uses the [Llama 3.1 Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) prompt format. The example script below can be used as a starting point for general chat:
```python
import transformers
import torch

model_id = "ValiantLabs/Llama3.1-8B-Esper2"

# Load the model in bfloat16 and spread it across available devices
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an AI assistant."},
    {"role": "user", "content": "Hi, how do I optimize the size of a Docker image?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=2048,
)

# The last entry in generated_text is the assistant's reply
print(outputs[0]["generated_text"][-1])
```
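For reference, the Llama 3.1 Instruct format wraps each turn in special header tokens; the pipeline above applies this automatically via the tokenizer's chat template. The sketch below is an illustrative approximation of that serialization, not the tokenizer's own implementation (`format_llama31_prompt` is a hypothetical helper for demonstration):

```python
# Illustrative sketch of the Llama 3.1 Instruct chat template.
# In practice, use tokenizer.apply_chat_template instead of hand-rolling this.
def format_llama31_prompt(messages):
    prompt = "<|begin_of_text|>"
    for msg in messages:
        prompt += f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
        prompt += f"{msg['content']}<|eot_id|>"
    # Trailing assistant header cues the model to generate its reply
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

messages = [
    {"role": "system", "content": "You are an AI assistant."},
    {"role": "user", "content": "Hi, how do I optimize the size of a Docker image?"},
]
print(format_llama31_prompt(messages))
```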
## The Model
Esper 2 is built on top of Llama 3.1 8b Instruct, improving performance through high-quality DevOps, code, and chat data in the Llama 3.1 Instruct prompt style.
Our current version of Esper 2 is trained on DevOps data from [sequelbox/Titanium](https://huggingface.co/datasets/sequelbox/Titanium), supplemented by code-instruct data from [sequelbox/Tachibana](https://huggingface.co/datasets/sequelbox/Tachibana) and general chat data from [sequelbox/Supernova.](https://huggingface.co/datasets/sequelbox/Supernova)
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/63444f2687964b331809eb55/VCJ8Fmefd8cdVhXSSxJiD.jpeg)
Esper 2 is created by [Valiant Labs.](http://valiantlabs.ca/)
[Check out our Hugging Face page for Shining Valiant 2, Enigma, and our other Build Tools models for creators!](https://huggingface.co/ValiantLabs)
[Follow us on X for updates on our models!](https://twitter.com/valiant_labs)
We care about open source.
For everyone to use.
We encourage others to finetune further from our models.