
Original model card

Buy me a coffee if you like this project ;)

Description

GGUF format model files for this project.

GGUF Specs

GGUF is a format based on the existing GGJT, but makes a few changes to make it more extensible and easier to use. The following features are desired:

- Single-file deployment: models can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors, and new information can be added to GGUF models, without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.

The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows new metadata to be added without breaking compatibility with existing models, and lets a model be annotated with additional information that may be useful for inference or for identifying the model.
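As a concrete illustration of the single-file layout, here is a minimal sketch (not the official parser) of reading the fixed-size GGUF header in Python. The field layout follows the GGUF specification; the sample header bytes below are synthetic, not taken from a real model file:

```python
import struct

def parse_gguf_header(data: bytes) -> dict:
    """Parse the fixed-size GGUF header at the start of a file.

    Layout per the GGUF spec (little-endian):
      4 bytes  magic   b"GGUF"
      uint32   version
      uint64   tensor_count
      uint64   metadata_kv_count
    The typed key-value metadata pairs follow immediately after.
    """
    magic, version, tensor_count, kv_count = struct.unpack_from("<4sIQQ", data, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return {
        "version": version,
        "tensor_count": tensor_count,
        "metadata_kv_count": kv_count,
    }

# Synthetic example header: version 3, 291 tensors, 24 metadata pairs.
header = struct.pack("<4sIQQ", b"GGUF", 3, 291, 24)
print(parse_gguf_header(header))
```

Because everything after this header is typed key-value metadata, a reader can skip keys it does not understand, which is what makes the format extensible without breaking older files.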

Inference

User: Tell me story about what is an quantization and what do we need to build.

Story: Once upon a time, in the magical land of Electronia, there were talented engineers who could create marvelous machines. These machines had the ability to manipulate sound waves, making them louder or softer, higher or lower in pitch, or even changing their shape altogether. But one day, they faced a new challenge that required them to quantize sound.

In Electronia, the people had grown tired of hearing unstructured and chaotic music. They yearned for a more organized and harmonious sound that would soothe their spirits and uplift their souls. The talented


GGUF

Model size: 7.24B params
Architecture: llama
Quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
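For a rough sense of what each bit width means on disk, here is a back-of-the-envelope sketch. It ignores metadata overhead and the mixed-precision layouts real GGUF quant types (e.g. Q4_K_M) actually use, so these numbers are lower bounds, not the exact file sizes:

```python
PARAMS = 7.24e9  # parameter count from the model card

def approx_size_gb(bits_per_weight: float) -> float:
    # bits -> bytes -> decimal gigabytes
    return PARAMS * bits_per_weight / 8 / 1e9

for bits in (3, 4, 5, 6, 8):
    print(f"{bits}-bit: ~{approx_size_gb(bits):.2f} GB")
```

For example, at 4 bits per weight a 7.24B-parameter model needs at least 7.24e9 × 4 / 8 ≈ 3.62 GB, while the 8-bit file is roughly twice that.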


Space using s3nh/NexoNimbus-7B-GGUF