You only compress once: Towards effective and elastic BERT compression via exploit–explore stochastic nature gradient
Neurocomputing, ISSN: 0925-2312, Vol: 599, Page: 128140
2024
- 4 Citations
- 19 Captures
Article Description
Despite superior performance on various natural language processing tasks, pre-trained models such as BERT are challenged by deployment on resource-constrained devices. Most existing model compression approaches require re-compression or fine-tuning across diverse constraints to accommodate various hardware deployments, which practically limits the further application of model compression. Moreover, the ineffective training and searching process of existing elastic compression paradigms (Wang et al., 2020; Cai et al., 2020) prevents their direct migration to BERT compression. Motivated by the necessity of efficient inference across various constraints on BERT, we propose a novel approach, YOCO-BERT, to achieve compress once and deploy everywhere. Specifically, we first construct a huge search space with 10^13 architectures, which covers nearly all configurations of the BERT model. Then, we propose a novel stochastic nature gradient optimization method to guide the generation of optimal candidate architectures while keeping a balanced trade-off between exploration and exploitation. When a certain resource constraint is given, a lightweight distribution optimization approach is utilized to obtain the optimal network for the target deployment without fine-tuning. Compared with state-of-the-art algorithms, YOCO-BERT provides more compact models while achieving a 2.1%–4.5% average accuracy improvement on the GLUE benchmark. YOCO-BERT is also more efficient: the training complexity is O(1) for N different devices. Code is available at https://github.com/MAC-AutoML/YOCO-BERT.
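The abstract sketches the optimization core: a distribution over sub-network configurations is updated by stochastic nature (natural) gradient steps, balancing exploration (sampling candidate architectures) against exploitation (shifting probability mass toward high-reward samples). The snippet below is a minimal, illustrative sketch of that idea, not the authors' implementation (see the linked repository); the per-layer categorical parameterization, layer and choice counts, step size, sample count, and reward function are assumptions made for illustration.

```python
import numpy as np

# Illustrative sketch only (not the authors' code): stochastic natural-gradient
# updates of per-layer categorical distributions over architecture choices, in the
# spirit of the exploit-explore optimization described in the abstract.
# Layer count, choice count, step size, and reward function are assumptions.

rng = np.random.default_rng(0)

num_layers, num_choices = 12, 4           # hypothetical: one choice per BERT layer
theta = np.full((num_layers, num_choices), 1.0 / num_choices)  # distribution params
lr, lam = 0.1, 8                          # step size and samples per update

def reward(arch):
    """Placeholder for the validation score of the sampled sub-network."""
    return -np.abs(arch - 1).mean()       # toy objective: prefer choice index 1

for step in range(200):
    # Explore: sample lam architectures from the current distribution.
    archs = np.array([[rng.choice(num_choices, p=theta[l]) for l in range(num_layers)]
                      for _ in range(lam)])
    rewards = np.array([reward(a) for a in archs])
    # Rank-based utilities keep the update invariant to reward scaling.
    utils = rewards.argsort().argsort() / (lam - 1) - 0.5
    # Exploit: natural-gradient step for categorical (exponential-family) parameters,
    # moving probability mass toward higher-utility samples.
    grad = np.zeros_like(theta)
    for a, u in zip(archs, utils):
        grad += u * (np.eye(num_choices)[a] - theta)
    theta += lr * grad / lam
    theta = np.clip(theta, 1e-6, None)
    theta /= theta.sum(axis=1, keepdims=True)

print(theta.argmax(axis=1))               # converges toward choice 1 in each layer
```

In the paper's setting, the reward would be the measured accuracy of a sub-network sampled from the once-trained super-network, and a lightweight, constraint-specific distribution optimization then selects the deployment network without fine-tuning.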
Bibliographic Details
- http://www.sciencedirect.com/science/article/pii/S0925231224009111
- http://dx.doi.org/10.1016/j.neucom.2024.128140
- http://www.scopus.com/inward/record.url?partnerID=HzOxMe3b&scp=85198604938&origin=inward
- https://linkinghub.elsevier.com/retrieve/pii/S0925231224009111
- https://dx.doi.org/10.1016/j.neucom.2024.128140
Elsevier BV