
I apply PEFT (Parameter-Efficient Fine-Tuning) to fine-tune the Whisper large-v2 model on Google FLEURS speech data for the transcription (speech-to-text) task.
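
As a concrete illustration, the sketch below shows one way the FLEURS audio and transcriptions could be converted into Whisper model inputs. The FLEURS language config ("hi_in") and the language passed to the processor are assumptions for illustration, since this card does not state which FLEURS language split was used; the column names follow the public google/fleurs dataset.

```python
# A minimal data-preparation sketch; the "hi_in" config and "hindi" language
# are placeholders, as the card does not name the FLEURS split used.
from datasets import Audio, load_dataset
from transformers import WhisperProcessor

processor = WhisperProcessor.from_pretrained(
    "openai/whisper-large-v2", language="hindi", task="transcribe"
)

fleurs = load_dataset("google/fleurs", "hi_in", split="train")
fleurs = fleurs.cast_column("audio", Audio(sampling_rate=16_000))

def prepare(batch):
    audio = batch["audio"]
    # Log-Mel spectrogram features for the Whisper encoder
    batch["input_features"] = processor.feature_extractor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_features[0]
    # Tokenized transcription targets for the decoder
    batch["labels"] = processor.tokenizer(batch["transcription"]).input_ids
    return batch

fleurs = fleurs.map(prepare, remove_columns=fleurs.column_names)
```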

Training procedure

The following bitsandbytes quantization config was used during training (a loading sketch follows the list):

  • load_in_8bit: True
  • load_in_4bit: False
  • llm_int8_threshold: 6.0
  • llm_int8_skip_modules: None
  • llm_int8_enable_fp32_cpu_offload: False
  • llm_int8_has_fp16_weight: False
  • bnb_4bit_quant_type: fp4
  • bnb_4bit_use_double_quant: False
  • bnb_4bit_compute_dtype: float32
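
For reference, the sketch below shows one way to recreate this setup with BitsAndBytesConfig and attach LoRA adapters via PEFT. Only the bitsandbytes values mirror the list above; the LoRA hyperparameters and target modules are illustrative assumptions, as the card does not document them.

```python
# A minimal sketch, not the exact training script. The bitsandbytes values
# mirror the list above; the LoRA settings (r, alpha, dropout, target
# modules) are assumed for illustration.
from transformers import BitsAndBytesConfig, WhisperForConditionalGeneration
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)

model = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v2",
    quantization_config=bnb_config,
    device_map="auto",
)

# Casts non-int8 parameters to fp32 and enables gradient checkpointing
model = prepare_model_for_kbit_training(model)

# Attach LoRA adapters to the attention projections (assumed target modules)
lora_config = LoraConfig(
    r=32, lora_alpha=64, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], bias="none",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```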

Framework versions

  • PEFT 0.4.0.dev0
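
The following is a hedged inference sketch. It assumes this adapter (timespirit/whisperlargev2) was trained on top of openai/whisper-large-v2 and that a CUDA device is available.

```python
# A minimal inference sketch, assuming the adapter sits on top of
# openai/whisper-large-v2 and a CUDA device is available.
import torch
from peft import PeftModel
from transformers import (
    BitsAndBytesConfig,
    WhisperForConditionalGeneration,
    WhisperProcessor,
)

base = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v2",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "timespirit/whisperlargev2")
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")

def transcribe(audio_array, sampling_rate=16_000):
    # `audio_array` is a mono waveform resampled to 16 kHz
    inputs = processor(audio_array, sampling_rate=sampling_rate, return_tensors="pt")
    with torch.no_grad(), torch.autocast(device_type="cuda"):
        ids = model.generate(input_features=inputs.input_features.to(base.device))
    return processor.batch_decode(ids, skip_special_tokens=True)[0]
```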

Dataset used to train timespirit/whisperlargev2

  • google/fleurs