
Whisper with a rotary encoder and a learned-sinusoid + rotary decoder, group layer norm, hybrid attention, learned temperature scaling, an MLP residual connection, and some other stuff. I'm still working on learned sinusoidal rotary (all in one), aka tensor mismatch hell. Soon I'll upload code for warm-start layer transfer, the full training loop, and the dataset collator in PyTorch, plus a separate HF integration if anyone is interested.

The goal with these models is to observe the effects that different embedding schemes and attention types have on catastrophic forgetting in Whisper. By encoding positional information directly into the attention mechanism, rotary embeddings might allow the model to generalize better to different sequence lengths and reduce its reliance on absolute positional information that could be specific to the pretraining data. Making the sinusoidal embeddings learnable in the decoder lets the model adapt them to the fine-tuning data, potentially reducing the conflict between the pretrained positional information and the new data. The model has a few other enhancements aimed at studying forgetting.
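
As a rough illustration of the two schemes (a sketch, not this repo's exact code): rotary embeddings rotate each query/key feature pair by a position-dependent angle inside attention, while the decoder's sinusoidal table is used only as the initialization of a trainable parameter. Shapes and names below are assumptions.

```python
import math
import torch
import torch.nn as nn

def apply_rotary(x: torch.Tensor, theta: float = 10000.0) -> torch.Tensor:
    # x: (batch, heads, seq_len, head_dim); head_dim must be even.
    seq_len, dim = x.shape[-2], x.shape[-1]
    pos = torch.arange(seq_len, device=x.device, dtype=x.dtype)
    freqs = theta ** (-torch.arange(0, dim, 2, device=x.device, dtype=x.dtype) / dim)
    angles = torch.outer(pos, freqs)        # (seq_len, dim/2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]     # interleaved 2D pairs
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin    # rotate each pair by its angle
    out[..., 1::2] = x1 * sin + x2 * cos
    return out
    # inside attention: q = apply_rotary(q); k = apply_rotary(k)

class LearnedSinusoidal(nn.Module):
    """Sinusoidal table used only as initialization; fine-tuning can move it."""
    def __init__(self, max_len: int, dim: int):
        super().__init__()
        pos = torch.arange(max_len, dtype=torch.float32).unsqueeze(1)
        inv = torch.exp(torch.arange(0, dim, 2, dtype=torch.float32)
                        * (-math.log(10000.0) / dim))
        table = torch.zeros(max_len, dim)
        table[:, 0::2] = torch.sin(pos * inv)
        table[:, 1::2] = torch.cos(pos * inv)
        # Unlike stock Whisper, this is a Parameter, not a frozen buffer.
        self.weight = nn.Parameter(table)

    def forward(self, seq_len: int) -> torch.Tensor:
        return self.weight[:seq_len]
```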

Catastrophic forgetting is not unique to Whisper, but it is more pronounced there, and it isn't uniform across the model, which makes it difficult to quantify and measure (WIP). I'm also poking around with shared attention. I'm less concerned about speed, more concerned about accuracy, and most concerned about memory and forgetting as per connectionist theory, which is a huge issue. Next up: selective embedding information transfer between decoder and encoder. Stay tuned. Pointer: warm-starting is helpful with this one but not necessary; it learns quickly (a sketch of the layer transfer is below).
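
A minimal sketch of the warm-start layer transfer, assuming you have a pretrained Whisper state dict and a modified model: this is the generic PyTorch shape-matching pattern, not this repo's exact code. Tensors whose names and shapes still match are copied; new or reshaped modules (e.g. the rotary pieces) keep their fresh initialization.

```python
import torch

def warm_start(model: torch.nn.Module, pretrained_state: dict) -> torch.nn.Module:
    own = model.state_dict()
    # Keep only pretrained tensors that still fit the modified architecture.
    compatible = {k: v for k, v in pretrained_state.items()
                  if k in own and own[k].shape == v.shape}
    skipped = [k for k in pretrained_state if k not in compatible]
    # strict=False leaves new/mismatched modules at their fresh init.
    model.load_state_dict(compatible, strict=False)
    print(f"transferred {len(compatible)} tensors, skipped {len(skipped)}")
    return model
```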

Other strategies to consider:

Regularization techniques:

- L2 regularization: adding an L2 penalty to the loss function penalizes large weight changes, encouraging the model to retain more of the pretrained knowledge.
- Elastic Weight Consolidation (EWC): identifies important parameters from the pretraining phase and adds a penalty term to the loss function that discourages changes to those parameters (see the sketch after this list).
- Synaptic Intelligence (SI): assigns importance weights to parameters based on their contribution to the pretraining task and uses those weights to regulate updates during fine-tuning.
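
For concreteness, a minimal EWC sketch. Assumptions: `loss_fn` is a placeholder returning the task loss for a batch, and the Fisher information is approximated diagonally by squared gradients averaged over pretraining-domain batches.

```python
import torch

def estimate_fisher(model, dataloader, loss_fn, n_batches=100):
    # Diagonal Fisher approximation: average squared gradients on the old task.
    fisher = {n: torch.zeros_like(p)
              for n, p in model.named_parameters() if p.requires_grad}
    for i, batch in enumerate(dataloader):
        if i >= n_batches:
            break
        model.zero_grad()
        loss_fn(model, batch).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / n_batches for n, f in fisher.items()}

def ewc_penalty(model, old_params, fisher, lam=1000.0):
    # Penalize moving parameters the pretraining task depended on.
    penalty = 0.0
    for n, p in model.named_parameters():
        if n in fisher:
            penalty = penalty + (fisher[n] * (p - old_params[n]) ** 2).sum()
    return 0.5 * lam * penalty

# Fine-tuning step: total_loss = task_loss + ewc_penalty(model, old_params, fisher)
# where old_params = {n: p.detach().clone() for n, p in model.named_parameters()}
# is snapshotted before fine-tuning begins.
```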
