---
language:
  - en
pipeline_tag: text-generation
tags:
  - esper
  - esper-2
  - valiant
  - valiant-labs
  - llama
  - llama-3.1
  - llama-3.1-instruct
  - llama-3.1-instruct-8b
  - llama-3
  - llama-3-instruct
  - llama-3-instruct-8b
  - 8b
  - code
  - code-instruct
  - python
  - dev-ops
  - terraform
  - azure
  - aws
  - gcp
  - architect
  - engineer
  - developer
  - conversational
  - chat
  - instruct
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
datasets:
  - sequelbox/Titanium
  - sequelbox/Tachibana
  - sequelbox/Supernova
model_type: llama
model-index:
  - name: ValiantLabs/Llama3.1-8B-Esper2
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-Shot)
          type: Winogrande
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 75.85
            name: acc
license: llama3.1
---


# QuantFactory/Llama3.1-8B-Esper2-GGUF

This is a quantized (GGUF) version of [ValiantLabs/Llama3.1-8B-Esper2](https://huggingface.co/ValiantLabs/Llama3.1-8B-Esper2), created using llama.cpp.
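As a starting point, a GGUF file from this repo can be run with llama.cpp's CLI. The exact filename and quantization level below are illustrative assumptions; check the repository's file list for the quantizations actually provided.

```shell
# Illustrative only: the GGUF filename/quantization below are assumptions;
# check the repository's file list for the actual names.
huggingface-cli download QuantFactory/Llama3.1-8B-Esper2-GGUF \
  Llama3.1-8B-Esper2.Q4_K_M.gguf --local-dir .

# Start an interactive chat session with llama.cpp
./llama-cli -m Llama3.1-8B-Esper2.Q4_K_M.gguf -cnv
```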

## Original Model Card


Esper 2 is a DevOps and cloud architecture code specialist built on Llama 3.1 8b.

- Expertise-driven: an AI assistant focused on AWS, Azure, GCP, Terraform, Dockerfiles, pipelines, shell scripts, and more!
- Real-world problem solving and high-quality code-instruct performance within the Llama 3.1 Instruct chat format
- Finetuned on synthetic DevOps-instruct and code-instruct data generated with Llama 3.1 405b
- Overall chat performance supplemented with generalist chat data

Try our code-instruct AI assistant Enigma!

## Version

This is the 2024-10-02 release of Esper 2 for Llama 3.1 8b.

Esper 2 is now available for Llama 3.2 3b!

Esper 2 will be coming to more model sizes soon :)

## Prompting Guide

Esper 2 uses the Llama 3.1 Instruct prompt format. The example script below can be used as a starting point for general chat:

```python
import transformers
import torch

model_id = "ValiantLabs/Llama3.1-8B-Esper2"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an AI assistant."},
    {"role": "user", "content": "Hi, how do I optimize the size of a Docker image?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=2048,
)

print(outputs[0]["generated_text"][-1])
```
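For reference, the Llama 3.1 Instruct format wraps each message in header and end-of-turn tokens. The helper below is a simplified sketch of what the model's chat template produces; in practice, `tokenizer.apply_chat_template` or the pipeline above handles this for you.

```python
# Simplified sketch of the Llama 3.1 Instruct chat format.
# In practice, use tokenizer.apply_chat_template instead of hand-rolling this.

def format_llama31_prompt(messages):
    """Render a message list into the Llama 3.1 Instruct chat format."""
    prompt = "<|begin_of_text|>"
    for msg in messages:
        prompt += f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
        prompt += f"{msg['content']}<|eot_id|>"
    # Open the assistant turn so the model generates the reply
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

messages = [
    {"role": "system", "content": "You are an AI assistant."},
    {"role": "user", "content": "Hi, how do I optimize the size of a Docker image?"},
]
print(format_llama31_prompt(messages))
```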

## The Model

Esper 2 is built on top of Llama 3.1 8b Instruct, improving performance through high-quality DevOps, code, and chat data in the Llama 3.1 Instruct prompt style.

Our current version of Esper 2 is trained on DevOps data from sequelbox/Titanium, supplemented by code-instruct data from sequelbox/Tachibana and general chat data from sequelbox/Supernova.


Esper 2 is created by Valiant Labs.

Check out our HuggingFace page for Shining Valiant 2, Enigma, and our other Build Tools models for creators!

Follow us on X for updates on our models!

We care about open source. For everyone to use.

We encourage others to finetune further from our models.