arxiv:2010.06993

Weight Squeezing: Reparameterization for Knowledge Transfer and Model Compression

Published on Oct 14, 2020

Abstract

In this work, we present a novel approach for simultaneous knowledge transfer and model compression called Weight Squeezing. With this method, we perform knowledge transfer from a teacher model by learning a mapping from its weights to the smaller student model's weights. We applied Weight Squeezing to a pre-trained text classification model based on BERT-Medium and compared our method to various other knowledge transfer and model compression methods on the GLUE multitask benchmark. We observed that our approach produces better results while being significantly faster than other methods for training student models. We also proposed a variant of Weight Squeezing called Gated Weight Squeezing, in which we combined fine-tuning of the BERT-Medium model with learning a mapping from BERT-Base weights. We showed that fine-tuning with Gated Weight Squeezing outperforms plain fine-tuning of the BERT-Medium model as well as other concurrent SoTA approaches, while being much easier to implement.
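To make the idea concrete, below is a minimal PyTorch sketch of what such a weight mapping could look like. It is an illustration under stated assumptions, not the paper's exact parameterization: the bilinear form of the mapping, the elementwise sigmoid gate, and the names WeightSqueezer and GatedWeightSqueezer are hypothetical, and the hidden sizes (768 for BERT-Base, 512 for BERT-Medium) are used only as plausible example dimensions.

import torch
import torch.nn as nn

class WeightSqueezer(nn.Module):
    """Maps a teacher weight matrix (d_t x d_t) to a smaller student
    weight matrix (d_s x d_s) via learned projections. Hypothetical
    parameterization; the paper may use a different mapping."""
    def __init__(self, d_teacher: int, d_student: int):
        super().__init__()
        # Learned left/right projections that "squeeze" the teacher weights.
        self.left = nn.Parameter(torch.randn(d_student, d_teacher) * 0.02)
        self.right = nn.Parameter(torch.randn(d_teacher, d_student) * 0.02)

    def forward(self, teacher_weight: torch.Tensor) -> torch.Tensor:
        # W_student = L @ W_teacher @ R
        return self.left @ teacher_weight @ self.right

class GatedWeightSqueezer(nn.Module):
    """Gated variant: blends a directly fine-tuned student weight with
    the weight squeezed from the teacher via a learnable gate."""
    def __init__(self, d_teacher: int, d_student: int):
        super().__init__()
        self.squeezer = WeightSqueezer(d_teacher, d_student)
        self.student_weight = nn.Parameter(torch.randn(d_student, d_student) * 0.02)
        # Elementwise gate logits; sigmoid(0) = 0.5, an even blend at init.
        self.gate = nn.Parameter(torch.zeros(d_student, d_student))

    def forward(self, teacher_weight: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate)
        return g * self.student_weight + (1.0 - g) * self.squeezer(teacher_weight)

# Usage: derive a student layer's weight from a frozen teacher layer.
d_t, d_s = 768, 512                 # example: BERT-Base -> BERT-Medium hidden sizes
teacher_w = torch.randn(d_t, d_t)   # stands in for a frozen teacher weight matrix
gws = GatedWeightSqueezer(d_t, d_s)
student_w = gws(teacher_w)          # (d_s, d_s), differentiable w.r.t. the mapping

Note that under this reading, the gated variant reduces to plain fine-tuning where the gate saturates at 1 and to pure weight squeezing where it saturates at 0, which matches the abstract's description of combining fine-tuning with a learned mapping from the teacher's weights.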
