Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown below.
The dataset generation failed
Error code: DatasetGenerationError
Exception: TypeError
Message: Couldn't cast array of type
struct<Ios16.add: int64, Ios16.cast: int64, Ios16.gather: int64, Ios16.gatherNd: int64, Ios16.gelu: int64, Ios16.layerNorm: int64, Ios16.linear: int64, Ios16.matmul: int64, Ios16.mul: int64, Ios16.reduceArgmax: int64, Ios16.reshape: int64, Ios16.softmax: int64, Stack: int64, Transpose: int64>
to
{'Ios16.add': Value(dtype='int64', id=None), 'Ios16.cast': Value(dtype='int64', id=None), 'Ios16.gather': Value(dtype='int64', id=None), 'Ios16.gatherNd': Value(dtype='int64', id=None), 'Ios16.layerNorm': Value(dtype='int64', id=None), 'Ios16.linear': Value(dtype='int64', id=None), 'Ios16.matmul': Value(dtype='int64', id=None), 'Ios16.mul': Value(dtype='int64', id=None), 'Ios16.reduceArgmax': Value(dtype='int64', id=None), 'Ios16.reshape': Value(dtype='int64', id=None), 'Ios16.sigmoid': Value(dtype='int64', id=None), 'Ios16.softmax': Value(dtype='int64', id=None), 'Stack': Value(dtype='int64', id=None), 'Transpose': Value(dtype='int64', id=None)}

Traceback:
Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
    writer.write_table(table)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
    pa_table = table_cast(pa_table, self._schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
    return cast_table_to_schema(table, schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2261, in cast_table_to_schema
    arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2261, in <listcomp>
    arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1802, in wrapper
    return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1802, in <listcomp>
    return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2122, in cast_array_to_feature
    raise TypeError(f"Couldn't cast array of type\n{_short_str(array.type)}\nto\n{_short_str(feature)}")
TypeError: Couldn't cast array of type struct<...> to {...} (same types as in the message above)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1529, in compute_config_parquet_and_info_response
    parquet_operations = convert_to_parquet(builder)
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1154, in convert_to_parquet
    builder.download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
    self._download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2038, in _prepare_split_single
    raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
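
The cast fails because the `mlProgramOperationTypeHistogram` column does not have the same struct keys in every row: the row being written counts `Ios16.gelu` but not `Ios16.sigmoid`, while the schema inferred from an earlier row expects `Ios16.sigmoid` but not `Ios16.gelu`, so Arrow cannot cast one fixed struct type to the other. The sketch below is a hedged workaround, not the fix the viewer itself would apply: if you assemble the metadata rows yourself as plain Python dicts (however you extract them from this repository), you can pad every histogram to a common key set before building a `datasets.Dataset`. The two toy rows are illustrative only.

```python
from datasets import Dataset

def normalize_histograms(rows, column="mlProgramOperationTypeHistogram"):
    """Give every row the same histogram keys, filling missing ops with 0."""
    all_ops = sorted({op for row in rows for op in row[column]})
    for row in rows:
        row[column] = {op: row[column].get(op, 0) for op in all_ops}
    return rows

# Two toy rows reproducing the mismatch above: one histogram has Ios16.sigmoid,
# the other Ios16.gelu, so their inferred struct types differ.
rows = [
    {"generatedClassName": "text_encoder",
     "mlProgramOperationTypeHistogram": {"Ios16.sigmoid": 12, "Ios16.softmax": 12}},
    {"generatedClassName": "text_encoder_2",
     "mlProgramOperationTypeHistogram": {"Ios16.gelu": 32, "Ios16.softmax": 32}},
]

ds = Dataset.from_list(normalize_histograms(rows))
print(ds.features["mlProgramOperationTypeHistogram"])  # one struct with all keys
```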
Preview columns and their types:

| Column | Type |
|---|---|
| computePrecision | string |
| isUpdatable | string |
| availability | dict |
| inputSchema | list |
| version | string |
| specificationVersion | int64 |
| method | string |
| modelParameters | sequence |
| license | string |
| modelType | dict |
| author | string |
| userDefinedMetadata | dict |
| generatedClassName | string |
| outputSchema | list |
| storagePrecision | string |
| metadataOutputVersion | string |
| shortDescription | string |
| mlProgramOperationTypeHistogram | dict |

Row 1: text encoder

- computePrecision: Mixed (Float16, Float32, Int32)
- isUpdatable: 0
- availability: iOS 16.0, macCatalyst 16.0, macOS 13.0, tvOS 16.0, watchOS 9.0
- inputSchema:
  - input_ids: MultiArray (Float32 1 × 77), shape [1, 77], required, fixed shape. "The token ids that represent the input text"
- version: diffusers/stable-diffusion-xl-base-1.0
- specificationVersion: 7
- method: predict
- modelParameters: []
- license: OpenRAIL (https://huggingface.co/spaces/CompVis/stable-diffusion-license)
- modelType: MLModelType_mlProgram
- author: Please refer to the Model Card available at huggingface.co/diffusers/stable-diffusion-xl-base-1.0
- userDefinedMetadata: com.github.apple.coremltools.source = torch==2.0.1+cu117, com.github.apple.coremltools.version = 7.0b1
- generatedClassName: Stable_Diffusion_version_diffusers_stable_diffusion_xl_base_1_0_text_encoder
- outputSchema:
  - hidden_embeds: MultiArray (Float32), shape [], required, fixed shape. "Hidden states after the encoder layers"
  - pooled_outputs: MultiArray (Float32), shape [], required, fixed shape. "The version of the `last_hidden_state` output after pooling"
- storagePrecision: Float16
- metadataOutputVersion: 3.0
- shortDescription: Stable Diffusion generates images conditioned on text and/or other images as input through the diffusion process. Please refer to https://arxiv.org/abs/2112.10752 for details.
- mlProgramOperationTypeHistogram: Ios16.add 37, Ios16.cast 3, Ios16.gather 1, Ios16.gatherNd 1, Ios16.layerNorm 25, Ios16.linear 72, Ios16.matmul 24, Ios16.mul 36, Ios16.reduceArgmax 1, Ios16.reshape 120, Ios16.sigmoid 12, Ios16.softmax 12, Stack 1, Transpose 60
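
Given the input and output schema in the record above, a call to this model with coremltools might look like the following. This is a minimal sketch, not code shipped with the repository: the .mlpackage file name is a placeholder, and the zero-filled token ids only exercise the interface (a real pipeline would produce them with the tokenizer that matches this text encoder).

```python
import numpy as np
import coremltools as ct

# Hypothetical path; point it at the text-encoder model downloaded from this repo.
model = ct.models.MLModel("TextEncoder.mlpackage")

# The model expects Float32 token ids with the fixed shape [1, 77].
input_ids = np.zeros((1, 77), dtype=np.float32)  # placeholder ids, not a real prompt

outputs = model.predict({"input_ids": input_ids})
print(outputs["hidden_embeds"].shape)   # hidden states after the encoder layers
print(outputs["pooled_outputs"].shape)  # pooled variant of the last hidden state
```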

Row 2: text encoder 2

- computePrecision: Mixed (Float16, Float32, Int32)
- isUpdatable: 0
- availability: iOS 16.0, macCatalyst 16.0, macOS 13.0, tvOS 16.0, watchOS 9.0
- inputSchema:
  - input_ids: MultiArray (Float32 1 × 77), shape [1, 77], required, fixed shape. "The token ids that represent the input text"
- version: diffusers/stable-diffusion-xl-base-1.0
- specificationVersion: 7
- method: predict
- modelParameters: []
- license: OpenRAIL (https://huggingface.co/spaces/CompVis/stable-diffusion-license)
- modelType: MLModelType_mlProgram
- author: Please refer to the Model Card available at huggingface.co/diffusers/stable-diffusion-xl-base-1.0
- userDefinedMetadata: com.github.apple.coremltools.source = torch==2.0.1+cu117, com.github.apple.coremltools.version = 7.0b1
- generatedClassName: Stable_Diffusion_version_diffusers_stable_diffusion_xl_base_1_0_text_encoder_2
- outputSchema:
  - hidden_embeds: MultiArray (Float32), shape [], required, fixed shape. "Hidden states after the encoder layers"
  - pooled_outputs: MultiArray (Float32), shape [], required, fixed shape. "The version of the `last_hidden_state` output after pooling"
- storagePrecision: Float16
- metadataOutputVersion: 3.0
- shortDescription: Stable Diffusion generates images conditioned on text and/or other images as input through the diffusion process. Please refer to https://arxiv.org/abs/2112.10752 for details.
- mlProgramOperationTypeHistogram: Ios16.add 97, Ios16.cast 3, Ios16.gather 1, Ios16.gatherNd 1, Ios16.gelu 32, Ios16.layerNorm 65, Ios16.linear 193, Ios16.matmul 64, Ios16.mul 32, Ios16.reduceArgmax 1, Ios16.reshape 320, Ios16.softmax 32, Stack 1, Transpose 160

Row 3: UNet denoiser (recipe_4_50_bit_mixedpalette)

- computePrecision: Mixed (Float16, Float32, Int32)
- isUpdatable: 0
- availability: iOS 16.0, macCatalyst 16.0, macOS 13.0, tvOS 16.0, visionOS 1.0, watchOS 9.0
- inputSchema:
  - sample: MultiArray (Float16 2 × 4 × 128 × 128), shape [2, 4, 128, 128], required, fixed shape. "The low resolution latent feature maps being denoised through reverse diffusion"
  - timestep: MultiArray (Float16 2), shape [2], required, fixed shape. "A value emitted by the associated scheduler object to condition the model on a given noise schedule"
  - encoder_hidden_states: MultiArray (Float16 2 × 2048 × 1 × 77), shape [2, 2048, 1, 77], required, fixed shape. "Output embeddings from the associated text_encoder model to condition to generated image on text. A maximum of 77 tokens (~40 words) are allowed. Longer text is truncated. Shorter text does not reduce computation."
  - time_ids: MultiArray (Float16 12), shape [12], required, fixed shape. No shortDescription.
  - text_embeds: MultiArray (Float16 2 × 1280), shape [2, 1280], required, fixed shape. No shortDescription.
- version: diffusers/stable-diffusion-xl-base-1.0
- specificationVersion: 7
- method: predict
- modelParameters: []
- license: OpenRAIL (https://huggingface.co/spaces/CompVis/stable-diffusion-license)
- modelType: MLModelType_mlProgram
- author: Please refer to the Model Card available at huggingface.co/diffusers/stable-diffusion-xl-base-1.0
- userDefinedMetadata: com.github.apple.coremltools.source = torch==2.0.1+cu117, com.github.apple.coremltools.version = 7.0b1, com.github.apple.ml-stable-diffusion.version = 1.0.0
- generatedClassName: recipe_4_50_bit_mixedpalette
- outputSchema:
  - noise_pred: MultiArray (Float32), shape [], required, fixed shape. "Same shape and dtype as the `sample` input. The predicted noise to facilitate the reverse diffusion (denoising) process"
- storagePrecision: Mixed (Float16, Palettized (1 bits), Palettized (2 bits), Palettized (4 bits), Palettized (6 bits), Palettized (8 bits))
- metadataOutputVersion: 3.0
- shortDescription: Stable Diffusion generates images conditioned on text or other images as input through the diffusion process. Please refer to https://arxiv.org/abs/2112.10752 for details.
- mlProgramOperationTypeHistogram: Concat 14, ExpandDims 6, Ios16.add 722, Ios16.batchNorm 46, Ios16.cast 1, Ios16.constexprLutToDense 775, Ios16.conv 794, Ios16.cos 2, Ios16.gelu 70, Ios16.matmul 280, Ios16.mul 842, Ios16.realDiv 46, Ios16.reduceMean 512, Ios16.reshape 675, Ios16.rsqrt 210, Ios16.silu 38, Ios16.sin 2, Ios16.softmax 140, Ios16.sqrt 46, Ios16.square 46, Ios16.sub 256, SliceByIndex 4, Split 70, UpsampleNearestNeighbor 2
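
The record above describes one denoising step of the reverse-diffusion loop: it takes the current latents plus the conditioning tensors and returns a noise prediction with the same shape as `sample`. The sketch below is an assumption-laden illustration rather than pipeline code from this repository: the file name is a placeholder, the arrays are zero-filled, and the comments about what fills each input (classifier-free-guidance batching, SDXL size/crop conditioning in `time_ids`, pooled embeddings in `text_embeds`) reflect the usual SDXL pipeline, not anything stated in the metadata.

```python
import numpy as np
import coremltools as ct

unet = ct.models.MLModel("Unet.mlpackage")  # hypothetical path

B = 2  # leading dimension of 2, presumably the unconditional and text-conditioned
       # branches used for classifier-free guidance

inputs = {
    # Current latents being denoised.
    "sample": np.zeros((B, 4, 128, 128), dtype=np.float16),
    # Scheduler timestep, repeated once per branch (value here is arbitrary).
    "timestep": np.array([999, 999], dtype=np.float16),
    # Text-encoder hidden states (2048 channels, 77 tokens).
    "encoder_hidden_states": np.zeros((B, 2048, 1, 77), dtype=np.float16),
    # SDXL micro-conditioning (original size, crop, target size), assumed 6 values per branch.
    "time_ids": np.zeros((12,), dtype=np.float16),
    # Pooled text embeddings (1280-dim), assumed to come from the second text encoder.
    "text_embeds": np.zeros((B, 1280), dtype=np.float16),
}

noise_pred = unet.predict(inputs)["noise_pred"]
print(noise_pred.shape)  # expected to match `sample`: (2, 4, 128, 128)
```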

Row 4: VAE decoder

- computePrecision: Mixed (Float32, Int32)
- isUpdatable: 0
- availability: iOS 16.0, macCatalyst 16.0, macOS 13.0, tvOS 16.0, watchOS 9.0
- inputSchema:
  - z: MultiArray (Float32 1 × 4 × 128 × 128), shape [1, 4, 128, 128], required, fixed shape. "The denoised latent embeddings from the unet model after the last step of reverse diffusion"
- version: diffusers/stable-diffusion-xl-base-1.0
- specificationVersion: 7
- method: predict
- modelParameters: []
- license: OpenRAIL (https://huggingface.co/spaces/CompVis/stable-diffusion-license)
- modelType: MLModelType_mlProgram
- author: Please refer to the Model Card available at huggingface.co/diffusers/stable-diffusion-xl-base-1.0
- userDefinedMetadata: com.github.apple.coremltools.source = torch==2.0.1, com.github.apple.coremltools.version = 7.0b1
- generatedClassName: Stable_Diffusion_version_diffusers_stable_diffusion_xl_base_1_0_vae_decoder
- outputSchema:
  - image: MultiArray (Float32), shape [], required, fixed shape. "Generated image normalized to range [-1, 1]"
- storagePrecision: Float32
- metadataOutputVersion: 3.0
- shortDescription: Stable Diffusion generates images conditioned on text and/or other images as input through the diffusion process. Please refer to https://arxiv.org/abs/2112.10752 for details.
- mlProgramOperationTypeHistogram: Ios16.add 46, Ios16.batchNorm 29, Ios16.conv 36, Ios16.linear 4, Ios16.matmul 2, Ios16.mul 2, Ios16.realDiv 30, Ios16.reduceMean 60, Ios16.reshape 65, Ios16.silu 29, Ios16.softmax 1, Ios16.sqrt 30, Ios16.square 30, Ios16.sub 30, Transpose 6, UpsampleNearestNeighbor 3
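
The decoder's `image` output is documented as normalized to [-1, 1], so converting it to displayable 8-bit pixels is a simple linear remap. The sketch below assumes the usual channel-first (1, 3, H, W) layout and a 1024 × 1024 output; neither is stated in the metadata above, so treat both as assumptions.

```python
import numpy as np

def to_uint8_image(image: np.ndarray) -> np.ndarray:
    """Map a [-1, 1] float image to [0, 255] uint8 and move channels last."""
    pixels = np.clip((image + 1.0) * 127.5, 0.0, 255.0).astype(np.uint8)
    return pixels[0].transpose(1, 2, 0)  # (1, 3, H, W) -> (H, W, 3), assumed layout

# Placeholder standing in for the `image` output of the VAE decoder.
decoder_output = np.zeros((1, 3, 1024, 1024), dtype=np.float32)
print(to_uint8_image(decoder_output).shape)  # (1024, 1024, 3)
```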

Row 5: VAE encoder

- computePrecision: Mixed (Float16, Float32, Int32)
- isUpdatable: 0
- availability: iOS 16.0, macCatalyst 16.0, macOS 13.0, tvOS 16.0, watchOS 9.0
- inputSchema:
  - z: MultiArray (Float16 1 × 3 × 1024 × 1024), shape [1, 3, 1024, 1024], required, fixed shape. "The input image to base the initial latents on normalized to range [-1, 1]"
- version: diffusers/stable-diffusion-xl-base-1.0
- specificationVersion: 7
- method: predict
- modelParameters: []
- license: OpenRAIL (https://huggingface.co/spaces/CompVis/stable-diffusion-license)
- modelType: MLModelType_mlProgram
- author: Please refer to the Model Card available at huggingface.co/diffusers/stable-diffusion-xl-base-1.0
- userDefinedMetadata: com.github.apple.coremltools.source = torch==2.0.1+cu117, com.github.apple.coremltools.version = 7.0b1
- generatedClassName: Stable_Diffusion_version_diffusers_stable_diffusion_xl_base_1_0_vae_encoder
- outputSchema:
  - latent: MultiArray (Float32), shape [], required, fixed shape. "The latent embeddings from the unet model from the input image."
- storagePrecision: Float16
- metadataOutputVersion: 3.0
- shortDescription: Stable Diffusion generates images conditioned on text and/or other images as input through the diffusion process. Please refer to https://arxiv.org/abs/2112.10752 for details.
- mlProgramOperationTypeHistogram: Ios16.add 34, Ios16.batchNorm 21, Ios16.cast 1, Ios16.conv 28, Ios16.linear 4, Ios16.matmul 2, Ios16.mul 2, Ios16.realDiv 22, Ios16.reduceMean 44, Ios16.reshape 49, Ios16.silu 21, Ios16.softmax 1, Ios16.sqrt 22, Ios16.square 22, Ios16.sub 22, Pad 3, Transpose 6

Row 6

- All columns are null except a single value, 14447, in the eleventh column (author, per the column order above).
No dataset card yet
Downloads last month: 9