Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown below.
The dataset generation failed because of a cast error.

**Error code:** `DatasetGenerationCastError`

**Message:** An error occurred while generating the dataset: all the data files must have the same columns, but at some point there are 2 new columns (`weight_type`, `base_model`) and 8 missing columns (`compute_dtype`, `hardware`, `scripts`, `model_params`, `model_size`, `gguf_ftype`, `quant_type`, `weight_dtype`). This happened while the json dataset builder was generating data using hf://datasets/Intel/ld_requests/Intel/neural-chat-7b-v3-1_eval_request_False_bfloat16_Original.json (at revision 8e762160a7a1b7aefe349ebe48d79c0b7dad9746). Please either edit the data files to have matching columns, or separate them into different configurations (see the docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).

**Traceback:**

```
Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1870, in _prepare_split_single
    writer.write_table(table)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 622, in write_table
    pa_table = table_cast(pa_table, self._schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2292, in table_cast
    return cast_table_to_schema(table, schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2240, in cast_table_to_schema
    raise CastError(
datasets.table.CastError: Couldn't cast
model: string
base_model: string
revision: string
private: bool
precision: string
params: double
architectures: string
weight_type: string
status: string
submitted_time: timestamp[s]
model_type: string
job_id: int64
job_start_time: timestamp[s]
to
{'model': Value(dtype='string', id=None), 'revision': Value(dtype='string', id=None), 'private': Value(dtype='bool', id=None), 'params': Value(dtype='float64', id=None), 'architectures': Value(dtype='string', id=None), 'quant_type': Value(dtype='string', id=None), 'precision': Value(dtype='string', id=None), 'model_params': Value(dtype='float64', id=None), 'model_size': Value(dtype='float64', id=None), 'weight_dtype': Value(dtype='string', id=None), 'compute_dtype': Value(dtype='string', id=None), 'gguf_ftype': Value(dtype='string', id=None), 'hardware': Value(dtype='string', id=None), 'status': Value(dtype='string', id=None), 'submitted_time': Value(dtype='timestamp[s]', id=None), 'model_type': Value(dtype='string', id=None), 'job_id': Value(dtype='int64', id=None), 'job_start_time': Value(dtype='null', id=None), 'scripts': Value(dtype='string', id=None)}
because column names don't match

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1417, in compute_config_parquet_and_info_response
    parquet_operations = convert_to_parquet(builder)
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1049, in convert_to_parquet
    builder.download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 924, in download_and_prepare
    self._download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1000, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1741, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1872, in _prepare_split_single
    raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 2 new columns ({'weight_type', 'base_model'}) and 8 missing columns ({'compute_dtype', 'hardware', 'scripts', 'model_params', 'model_size', 'gguf_ftype', 'quant_type', 'weight_dtype'}).
This happened while the json dataset builder was generating data using
hf://datasets/Intel/ld_requests/Intel/neural-chat-7b-v3-1_eval_request_False_bfloat16_Original.json (at revision 8e762160a7a1b7aefe349ebe48d79c0b7dad9746)
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
```
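The second fix the error message suggests is separating the mismatched files into different configurations. A minimal sketch of the `configs` block that could be added to the dataset's README.md front matter follows; the config names and glob patterns here are illustrative assumptions, not the dataset's actual layout:

```yaml
# Hypothetical README.md YAML front matter for Intel/ld_requests.
# Each config groups files that share one schema; the patterns below
# are assumptions and would need to match the repo's real file layout.
configs:
  - config_name: quantization_requests   # new-style files with quant_type, weight_dtype, etc.
    data_files: "*/*_eval_request_*_4bit_*.json"
  - config_name: original_requests      # old-style files with base_model and weight_type
    data_files: "*/*_eval_request_*_Original.json"
```

With separate configurations, each group is cast against its own schema, so the viewer no longer tries to force both column sets into one table.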
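The first fix the error message suggests, editing the data files so they share matching columns, amounts to projecting every record onto one column set. A minimal standard-library sketch, using the column names taken from the cast error above (the helper name `normalize` is ours, not part of any library):

```python
# Sketch: project every request record onto a single shared column set,
# so all JSON files cast to the same schema. Column names are the union
# of the target schema and the "2 new columns" from the error above.
ALL_COLUMNS = [
    "model", "base_model", "revision", "private", "precision", "params",
    "architectures", "weight_type", "quant_type", "model_params",
    "model_size", "weight_dtype", "compute_dtype", "gguf_ftype",
    "hardware", "scripts", "status", "submitted_time", "model_type",
    "job_id", "job_start_time",
]

def normalize(record: dict) -> dict:
    """Return a record with exactly ALL_COLUMNS, filling absent fields with None."""
    return {col: record.get(col) for col in ALL_COLUMNS}

# Example: an old-style request file that lacks the newer quantization fields.
old_style = {"model": "Intel/neural-chat-7b-v3-1", "base_model": "", "revision": "main"}
fixed = normalize(old_style)
assert set(fixed) == set(ALL_COLUMNS)
assert fixed["quant_type"] is None  # missing columns become explicit nulls
```

Rewriting each JSON file through such a projection (and re-uploading) would give every file identical columns, which is the other route the error message offers.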
| model (string) | revision (string) | private (bool) | params (float64) | architectures (string) | quant_type (string) | precision (string) | model_params (float64) | model_size (float64) | weight_dtype (string) | compute_dtype (string) | gguf_ftype (string) | hardware (string) | status (string) | submitted_time (timestamp[us]) | model_type (string) | job_id (int64) | job_start_time (null) | scripts (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
DiscoResearch/Llama3-DiscoLeo-Instruct-8B-v0.1-4bit-awq | main | false | 5.73 | LlamaForCausalLM | AWQ | 4bit | 7.03 | 5.73 | int4 | All | *Q4_0.gguf | gpu | Pending | 2024-05-25T21:56:42 | quantization | -1 | null | ITREX |
ISTA-DASLab/Llama-2-70b-AQLM-2Bit-1x16-hf | main | false | null | LlamaForCausalLM | AQLM | 2bit | null | null | int2 | float16 | *Q4_0.gguf | gpu | Pending | 2024-05-15T03:10:03 | quantization | -1 | null | ITREX |
ISTA-DASLab/Llama-2-7b-AQLM-2Bit-1x16-hf | main | false | 2.38 | LlamaForCausalLM | AQLM | 2bit | 6.48 | 2.38 | int2 | float16 | *Q4_0.gguf | gpu | Pending | 2024-05-15T03:44:59 | quantization | -1 | null | ITREX |
ISTA-DASLab/Llama-2-7b-AQLM-2Bit-8x8-hf | main | false | 2.73 | LlamaForCausalLM | AQLM | 2bit | 6.48 | 2.73 | int2 | float16 | *Q4_0.gguf | gpu | Pending | 2024-05-15T03:43:56 | quantization | -1 | null | ITREX |
ISTA-DASLab/Llama-3-8B-Instruct-GPTQ-4bit | main | false | 5.74 | LlamaForCausalLM | GPTQ | 4bit | 7.04 | 5.74 | int4 | float16 | *Q4_0.gguf | gpu | Pending | 2024-05-16T08:11:55 | quantization | -1 | null | ITREX |
ISTA-DASLab/Meta-Llama-3-70B-AQLM-2Bit-1x16 | main | false | 21.92 | LlamaForCausalLM | AQLM | 2bit | 68.45 | 21.92 | int2 | float16 | *Q4_0.gguf | gpu | Pending | 2024-05-20T14:41:26 | quantization | -1 | null | ITREX |
ISTA-DASLab/Mixtral-8x7B-Instruct-v0_1-AQLM-2Bit-1x16-hf | main | false | 13.09 | MixtralForCausalLM | AQLM | 2bit | 46.44 | 13.09 | int2 | float16 | *Q4_0.gguf | gpu | Pending | 2024-05-18T19:19:56 | quantization | -1 | null | ITREX |
ISTA-DASLab/SOLAR-10.7B-Instruct-v1.0-GPTQ-4bit | main | false | 5.98 | LlamaForCausalLM | GPTQ | 4bit | 10.57 | 5.98 | int4 | float16 | *Q4_0.gguf | gpu | Pending | 2024-05-20T10:16:50 | quantization | -1 | null | ITREX |
Intel/Qwen2-57B-A14B-Instruct-int4-inc | main | false | 31.44 | Qwen2MoeForCausalLM | AutoRound | 4bit | 15.72 | 31.44 | int4 | bfloat16 | *Q4_0.gguf | gpu | Pending | 2024-10-23T10:26:49 | quantization | -1 | null | ITREX |
Intel/Qwen2.5-0.5B-Instruct-int4-inc-private | main | false | 0.46 | Qwen2ForCausalLM | AutoRound | 4bit | 0.92 | 0.46 | int4 | float16 | *Q4_0.gguf | gpu | Pending | 2024-10-09T05:53:31 | quantization | -1 | null | ITREX |
Intel/Qwen2.5-0.5B-Instruct-int4-inc-private | main | false | 0.46 | Qwen2ForCausalLM | null | 16bit | 0.18 | 0.46 | float16 | float16 | *Q4_0.gguf | gpu | Pending | 2024-10-08T07:56:12 | original | -1 | null | ITREX |
Intel/Qwen2.5-1.5B-Instruct-int4-inc-private | main | false | 1.15 | Qwen2ForCausalLM | AutoRound | 4bit | 2.3 | 1.15 | int4 | float16 | *Q4_0.gguf | gpu | Pending | 2024-10-09T05:53:59 | quantization | -1 | null | ITREX |
Intel/Qwen2.5-14B-Instruct-int4-inc-private | main | false | 9.99 | Qwen2ForCausalLM | null | 16bit | 3.33 | 9.99 | float16 | float16 | *Q4_0.gguf | gpu | Pending | 2024-10-08T10:51:10 | original | -1 | null | ITREX |
Intel/Qwen2.5-32B-Instruct-int4-inc-private | main | false | 19.34 | Qwen2ForCausalLM | AutoRound | 4bit | 9.67 | 19.34 | int4 | float16 | *Q4_0.gguf | gpu | Pending | 2024-10-09T07:24:26 | quantization | -1 | null | ITREX |
Intel/Qwen2.5-3B-Instruct-int4-inc-private | main | false | 2.07 | Qwen2ForCausalLM | AutoRound | 4bit | 4.14 | 2.07 | int4 | float16 | *Q4_0.gguf | gpu | Pending | 2024-10-09T05:52:56 | quantization | -1 | null | ITREX |
Intel/Qwen2.5-72B-Instruct-int4-inc-private | main | false | 41.49 | Qwen2ForCausalLM | AutoRound | 4bit | 20.74 | 41.49 | int4 | float16 | *Q4_0.gguf | gpu | Pending | 2024-10-09T04:37:32 | quantization | -1 | null | ITREX |
Intel/Qwen2.5-72B-Instruct-int4-inc-private | main | false | 41.49 | Qwen2ForCausalLM | null | 16bit | 11.89 | 41.49 | float16 | float16 | *Q4_0.gguf | gpu | Pending | 2024-10-09T01:08:16 | original | -1 | null | ITREX |
Intel/Qwen2.5-7B-Instruct-int4-inc-private | main | false | 5.58 | Qwen2ForCausalLM | AutoRound | 4bit | 2.79 | 5.58 | int4 | float16 | *Q4_0.gguf | gpu | Pending | 2024-10-09T06:26:21 | quantization | -1 | null | ITREX |
Intel/falcon-7b-int4-inc | main | false | 4.76 | FalconForCausalLM | GPTQ | 4bit | 6.74 | 4.76 | int4 | float16 | *Q4_0.gguf | gpu | Pending | 2024-05-28T15:45:53 | quantization | -1 | null | ITREX |
Intel/gemma-2b-int4-inc | main | false | 3.13 | GemmaForCausalLM | GPTQ | 4bit | 2 | 3.13 | int4 | float16 | *Q4_0.gguf | gpu | Pending | 2024-05-28T15:48:10 | quantization | -1 | null | ITREX |
Intel/neural-chat-7b-v3-1 | main | false | 7.242 | MistralForCausalLM | null | bfloat16 | null | null | null | null | null | null | RUNNING | 2024-04-08T08:42:16 | 🔶 : fine-tuned on domain-specific datasets | 6 | null | null |
Intel/phi-2-int4-inc | main | false | 1.84 | PhiForCausalLM | GPTQ | 4bit | 2.54 | 1.84 | int4 | float16 | *Q4_0.gguf | gpu | Pending | 2024-05-28T15:43:10 | quantization | -1 | null | ITREX |
Nan-Do/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B-GGUF | main | false | null | ? | llama.cpp | 4bit | null | null | int4 | bfloat16 | *Q4_0.gguf | cpu | Pending | 2024-05-18T02:32:17 | quantization | -1 | null | llama_cpp |
PawanKrd/Meta-Llama-3-8B-Instruct-GGUF | main | false | null | ? | llama.cpp | 4bit | null | null | int4 | float16 | *Q4_0.gguf | cpu | Pending | 2024-05-09T12:57:56 | quantization | -1 | null | llama_cpp |
PrunaAI/Phi-3-mini-128k-instruct-GGUF-Imatrix-smashed | main | false | null | ? | llama.cpp | 4bit | null | null | int4 | float16 | *Q4_0.gguf | cpu | Pending | 2024-05-10T07:32:46 | quantization | -1 | null | llama_cpp |
Qwen/Qwen2.5-14B-Instruct-AWQ | main | false | 9.98 | Qwen2ForCausalLM | AWQ | 4bit | 13.32 | 9.98 | int4 | float16 | *Q4_0.gguf | gpu | Pending | 2024-10-30T02:47:38 | quantization | -1 | null | ITREX |
Qwen/Qwen2.5-72B-Instruct-GPTQ-Int4 | main | false | 41.62 | Qwen2ForCausalLM | GPTQ | 4bit | 71.07 | 41.62 | int4 | float16 | *Q4_0.gguf | gpu | Pending | 2024-11-04T04:16:26 | quantization | -1 | null | ITREX |
Ramikan-BR/tinyllama-coder-py-4bit-v10 | da5637d | false | 2.2 | LlamaForCausalLM | null | 16bit | 1.1 | 2.2 | float16 | float16 | *Q4_0.gguf | gpu | Pending | 2024-05-28T21:43:42 | original | -1 | null | ITREX |
Ramikan-BR/tinyllama-coder-py-v11 | f1b5e2e | false | 2.2 | LlamaForCausalLM | null | 16bit | 1.1 | 2.2 | float16 | float16 | *Q4_0.gguf | gpu | Pending | 2024-05-28T21:45:09 | original | -1 | null | ITREX |
Ramikan-BR/tinyllama-coder-py-v12 | abd0469 | false | 2.2 | LlamaForCausalLM | null | 16bit | 1.1 | 2.2 | float16 | float16 | *Q4_0.gguf | gpu | Pending | 2024-05-28T21:46:32 | original | -1 | null | ITREX |
TheBloke/Falcon-7B-Instruct-GPTQ | main | false | 5.94 | RWForCausalLM | GPTQ | 4bit | 6.74 | 5.94 | int4 | float16 | *Q4_0.gguf | gpu | Pending | 2024-05-10T06:13:37 | quantization | -1 | null | ITREX |
TheBloke/Llama-2-13B-chat-AWQ | main | false | 7.25 | LlamaForCausalLM | AWQ | 4bit | 12.79 | 7.25 | int4 | float16 | *Q4_0.gguf | gpu | Pending | 2024-05-10T07:46:17 | quantization | -1 | null | ITREX |
TheBloke/Llama-2-13B-chat-GPTQ | main | false | 7.26 | LlamaForCausalLM | GPTQ | 4bit | 12.8 | 7.26 | int4 | float16 | *Q4_0.gguf | gpu | Pending | 2024-05-10T07:50:09 | quantization | -1 | null | ITREX |
TheBloke/Mistral-7B-Instruct-v0.2-AWQ | main | false | 4.15 | MistralForCausalLM | AWQ | 4bit | 7.03 | 4.15 | int4 | float16 | *Q4_0.gguf | gpu | Pending | 2024-05-09T02:54:02 | quantization | -1 | null | ITREX |
TheBloke/Mistral-7B-Instruct-v0.2-GPTQ | main | false | 4.16 | MistralForCausalLM | GPTQ | 4bit | 7.04 | 4.16 | int4 | float16 | *Q4_0.gguf | gpu | Pending | 2024-05-10T05:47:33 | quantization | -1 | null | ITREX |
TheBloke/Mixtral-8x7B-Instruct-v0.1-AWQ | main | false | 24.65 | MixtralForCausalLM | AWQ | 4bit | 46.8 | 24.65 | int4 | float16 | *Q4_0.gguf | gpu | Pending | 2024-05-10T06:21:34 | quantization | -1 | null | ITREX |
TheBloke/Mixtral-8x7B-Instruct-v0.1-GPTQ | main | false | 23.81 | MixtralForCausalLM | GPTQ | 4bit | 46.5 | 23.81 | int4 | float16 | *Q4_0.gguf | gpu | Pending | 2024-05-13T11:54:45 | quantization | -1 | null | ITREX |
TheBloke/SOLAR-10.7B-Instruct-v1.0-GPTQ | main | false | 5.98 | LlamaForCausalLM | GPTQ | 4bit | 10.57 | 5.98 | int4 | float16 | *Q4_0.gguf | gpu | Pending | 2024-05-09T09:03:57 | quantization | -1 | null | ITREX |
astronomer/Llama-3-8B-Instruct-GPTQ-4-Bit | main | false | 5.74 | LlamaForCausalLM | GPTQ | 4bit | 7.04 | 5.74 | int4 | float16 | *Q4_0.gguf | gpu | Pending | 2024-05-10T04:42:46 | quantization | -1 | null | ITREX |
casperhansen/falcon-7b-awq | main | false | 4.16 | RWForCausalLM | AWQ | 4bit | 8.33 | 4.16 | int4 | float16 | *Q4_0.gguf | gpu | Pending | 2024-05-10T06:47:20 | quantization | -1 | null | ITREX |
crusoeai/Llama-3-8B-Instruct-Gradient-1048k-GGUF | main | false | null | ? | llama.cpp | 4bit | null | null | int4 | int8 | *Q4_0.gguf | cpu | Pending | 2024-05-11T17:37:21 | quantization | -1 | null | llama_cpp |
cstr/Spaetzle-v60-7b-Q4_0-GGUF | main | false | null | ? | llama.cpp | 4bit | null | null | int4 | ? | *Q4_0.gguf | cpu | Pending | 2024-05-11T07:32:05 | quantization | -1 | null | llama_cpp |
cstr/Spaetzle-v60-7b-int4-inc | main | false | 4.16 | MistralForCausalLM | GPTQ | 4bit | 7.04 | 4.16 | int4 | float16 | *Q4_0.gguf | gpu | Pending | 2024-05-11T11:55:16 | quantization | -1 | null | ITREX |
cstr/llama3-8b-spaetzle-v20-int4-inc | main | false | 5.74 | LlamaForCausalLM | GPTQ | 4bit | 7.04 | 5.74 | int4 | All | *Q4_0.gguf | gpu | Pending | 2024-05-18T10:08:36 | quantization | -1 | null | ITREX |
cstr/llama3-8b-spaetzle-v33-int4-inc | main | false | 5.74 | LlamaForCausalLM | GPTQ | 4bit | 7.04 | 5.74 | int4 | bfloat16 | *Q4_0.gguf | gpu | Pending | 2024-05-28T13:12:49 | quantization | -1 | null | ITREX |
cstr/llama3-8b-spaetzle-v33-int4-inc | main | false | 5.74 | LlamaForCausalLM | GPTQ | 4bit | 7.04 | 5.74 | int4 | float16 | *Q4_0.gguf | gpu | Pending | 2024-05-28T16:18:54 | quantization | -1 | null | ITREX |
facebook/opt-1.3b | main | false | 1.3 | OPTForCausalLM | null | bfloat16 | null | null | null | null | null | null | FINISHED | 2024-04-10T11:23:43 | 🟢 : pretrained | 2 | null | null |
facebook/opt-125m | main | false | 0.125 | OPTForCausalLM | null | bfloat16 | null | null | null | null | null | null | FINISHED | 2024-04-10T12:05:21 | 🟢 : pretrained | 3 | null | null |
facebook/opt-350m | main | false | 0.35 | OPTForCausalLM | Rtn | 8bit | null | null | int8 | null | null | null | FINISHED | 2024-04-11T05:48:05 | 🟢 : pretrained | 5 | null | null |
facebook/opt-350m | main | false | 0.35 | OPTForCausalLM | Rtn | 8bit | null | null | int8 | null | null | null | FINISHED | 2024-04-11T05:48:05 | 🟢 : pretrained | 6 | null | null |
facebook/opt-350m | main | false | 0.35 | OPTForCausalLM | null | bfloat16 | null | null | null | null | null | null | FINISHED | 2024-04-10T13:12:22 | 🟢 : pretrained | 4 | null | null |
islam23/llama3-8b-RAG_News_Finance | main | false | null | LlamaForCausalLM | bitsandbytes | 4bit | null | null | int4 | float16 | *Q4_0.gguf | gpu | Pending | 2024-05-15T16:46:01 | quantization | -1 | null | ITREX |
leliuga/Llama-2-13b-chat-hf-bnb-4bit | main | false | 7.2 | LlamaForCausalLM | bitsandbytes | 4bit | 13.08 | 7.2 | int4 | float16 | *Q4_0.gguf | gpu | Pending | 2024-05-10T07:47:50 | quantization | -1 | null | ITREX |
leliuga/Phi-3-mini-128k-instruct-bnb-4bit | main | false | 2.26 | Phi3ForCausalLM | bitsandbytes | 4bit | 3.74 | 2.26 | int4 | float16 | *Q4_0.gguf | gpu | Pending | 2024-05-10T07:32:00 | quantization | -1 | null | ITREX |
lodrick-the-lafted/Olethros-8B-AWQ | main | false | 5.73 | LlamaForCausalLM | AWQ | 4bit | 7.03 | 5.73 | int4 | float16 | *Q4_0.gguf | gpu | Pending | 2024-05-11T18:47:40 | quantization | -1 | null | ITREX |
mayflowergmbh/occiglot-7b-de-en-instruct-AWQ | main | false | 4.15 | MistralForCausalLM | AWQ | 4bit | 7.03 | 4.15 | int4 | All | *Q4_0.gguf | gpu | Pending | 2024-05-25T21:58:36 | quantization | -1 | null | ITREX |
noxinc/phi-3-portuguese-tom-cat-4k-instruct-Q4_0-GGUF-PTBR | main | false | null | ? | llama.cpp | 4bit | null | null | int4 | float16 | *Q4_0.gguf | cpu | Pending | 2024-05-20T04:01:34 | quantization | -1 | null | llama_cpp |
tclf90/qwen2.5-72b-instruct-gptq-int3 | main | false | 32.76 | Qwen2ForCausalLM | GPTQ | 3bit | 79.97 | 32.76 | int3 | float16 | *Q4_0.gguf | gpu | Pending | 2024-11-04T04:36:57 | quantization | -1 | null | ITREX |
tclf90/qwen2.5-72b-instruct-gptq-int4 | main | false | 41.63 | Qwen2ForCausalLM | GPTQ | 4bit | 71.07 | 41.63 | int4 | float16 | *Q4_0.gguf | gpu | Pending | 2024-11-04T04:32:14 | quantization | -1 | null | ITREX |
README.md exists but content is empty.
Downloads last month: 69