Ambari-7B-Instruct-v0.1

Overview

Ambari-7B-Instruct-v0.1 extends the Ambari series, a bilingual English/Kannada family of models developed and released by Cognitivelab.in. It is built on the Ambari-7B-Base-v0.1 model and fine-tuned on a curated dataset of translated instructional pairs, specializing it for instruction following and related natural language understanding tasks.

Usage

To use the Ambari-7B-Instruct-v0.1 model, you can follow the example code below:

import torch
from transformers import LlamaTokenizer, LlamaForCausalLM

# Load the instruct model and its tokenizer; the weights are stored in bfloat16
model = LlamaForCausalLM.from_pretrained(
    'Cognitive-Lab/Ambari-7B-Instruct-v0.1',
    torch_dtype=torch.bfloat16,
)
tokenizer = LlamaTokenizer.from_pretrained('Cognitive-Lab/Ambari-7B-Instruct-v0.1')

prompt = "Give me 10 study tips in Kannada."
inputs = tokenizer(prompt, return_tensors="pt")

# Generate up to 1000 tokens (prompt included) and decode back to text
generate_ids = model.generate(inputs.input_ids, max_length=1000)
decoded_output = tokenizer.batch_decode(
    generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False
)[0]

print(decoded_output)
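
Since the prompt asks for the tips in Kannada, the model is expected to reply in Kannada, in line with the bilingual fine-tuning described below.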

Learn More

Read more about Ambari-7B-Instruct-v0.1 and its applications in natural language understanding tasks on the Cognitivelab.in blog.

Dataset Information

The model is fine-tuned on the Kannada Instruct Dataset, a collection of translated instructional pairs. The dataset contains English instruction and output pairs along with their corresponding Kannada translations, and it deliberately mixes language combinations to strengthen the model's cross-lingual ability.
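
For illustration, the sketch below shows one plausible record layout covering several instruction/output language combinations. The field names and the exact set of combinations are assumptions for this example, not the dataset's published schema.

# Hypothetical records from a bilingual instruct dataset. Field names and
# combinations are illustrative, not actual entries from the Kannada
# Instruct Dataset.
kannada_instruct_examples = [
    {"instruction_lang": "en", "output_lang": "en",
     "instruction": "Summarize the paragraph below.",
     "output": "A short English summary."},
    {"instruction_lang": "en", "output_lang": "kn",
     "instruction": "Explain photosynthesis in Kannada.",
     "output": "(a Kannada-language explanation)"},
    {"instruction_lang": "kn", "output_lang": "kn",
     "instruction": "(a Kannada-language instruction)",
     "output": "(a Kannada-language response)"},
]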

Bilingual Instruct Fine-tuning

The model then underwent supervised fine-tuning with low-rank adaptation (LoRA), a bilingual instruct fine-tuning stage in which it was trained to respond in either English or Kannada, matching the language specified in the user's prompt or instruction.
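
A minimal sketch of such a LoRA setup with the Hugging Face peft library is shown below. The base checkpoint id is inferred from this repository's naming, and the rank, scaling factor, dropout, and target modules are illustrative assumptions, not the values used to train this model.

# LoRA setup sketch using peft; hyperparameters are assumptions, not the
# values used for Ambari-7B-Instruct-v0.1.
from peft import LoraConfig, get_peft_model
from transformers import LlamaForCausalLM

base_model = LlamaForCausalLM.from_pretrained('Cognitive-Lab/Ambari-7B-Base-v0.1')

lora_config = LoraConfig(
    r=16,                                 # assumed adapter rank
    lora_alpha=32,                        # assumed scaling factor
    lora_dropout=0.05,                    # assumed dropout
    target_modules=["q_proj", "v_proj"],  # common choice for Llama attention
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the low-rank adapters are trainable

The wrapped model can then be trained with a standard supervised fine-tuning loop on the bilingual instruction pairs described above.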
