Pruning (18)
둔비의 공부공간

Federated + Pruning
https://arxiv.org/pdf/2303.06237 (AAAI 2023 Accepted)
*communication cost: the overhead that arises from communication between the server and clients in Federated Learning.
Abstract: Federated Learning is a privacy-preserving distributed deep learning paradigm. However, Federated Learning requires computation and communication cost, which is a burden especially for devices in mobile-like environments. Existing pruning methods ... low bidirectional communication cost between client and server, clie..
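A back-of-the-envelope sketch (mine, not the paper's method) of why pruning reduces the communication cost: if a client uploads only the surviving weights plus their indices instead of the full dense update, the per-round payload shrinks roughly in proportion to the kept ratio.

```python
# Toy payload comparison for one client->server upload in Federated Learning.
import numpy as np

def dense_payload_bytes(weights: np.ndarray) -> int:
    # Uploading every float32 parameter.
    return weights.size * 4

def sparse_payload_bytes(weights: np.ndarray, mask: np.ndarray) -> int:
    # Uploading only surviving float32 values plus their int32 indices.
    kept = int(mask.sum())
    return kept * 4 + kept * 4

rng = np.random.default_rng(0)
w = rng.normal(size=1_000_000).astype(np.float32)   # 1M-parameter toy model
mask = np.abs(w) > np.quantile(np.abs(w), 0.9)      # keep the top-10% magnitudes

print(dense_payload_bytes(w))        # ~4.0 MB per round
print(sparse_payload_bytes(w, mask)) # ~0.8 MB per round (values + indices)
```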

https://openreview.net/forum?id=Y9t7MqZtCR
Sparse Weight Averaging with Multiple Particles for Iterative...
Given the ever-increasing size of modern neural networks, the significance of sparse architectures has surged due to their accelerated inference speeds and minimal memory demands. When it comes to...
(I have not verified that it reproduces, but the code is also uploaded as a zip file at the link above.)
Abstract: IMP (Iterative Magni..
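For reference, a minimal IMP-style prune/retrain loop (my sketch, not the paper's code). The `train` argument is a placeholder for your own training routine, and the weight rewinding used by the original IMP is omitted for brevity.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

def iterative_magnitude_pruning(model: nn.Module, train, rounds: int = 5):
    # Prune the weight tensors of all conv / linear layers.
    targets = [(m, "weight") for m in model.modules()
               if isinstance(m, (nn.Conv2d, nn.Linear))]
    for _ in range(rounds):
        train(model)  # retrain (or fine-tune) with the current mask in place
        # Globally remove 20% of the *remaining* weights, smallest magnitude first.
        prune.global_unstructured(
            targets, pruning_method=prune.L1Unstructured, amount=0.2
        )
    return model
```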

Sparse Model Soups: A Recipe for Improved Pruning via Model Averaging
https://arxiv.org/abs/2306.16788
https://github.com/ZIB-IOL/SMS (code to reproduce the experiments of the ICLR 2024 paper)
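My rough sketch of the soup step, assuming every ingredient model was fine-tuned from the same pruned checkpoint and therefore shares one sparsity mask: averaging their state dicts keeps the zeros in place, so the soup stays sparse.

```python
# Average several state dicts that share the same sparsity pattern.
# Sketch only; the SMS repo handles retraining, masking, and soup selection.
import copy
import torch

def sparse_soup(state_dicts):
    soup = copy.deepcopy(state_dicts[0])
    for key, value in soup.items():
        if value.is_floating_point():
            # Positions pruned to zero are zero in every ingredient,
            # so the uniform average is zero there as well.
            soup[key] = torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
    return soup
```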

https://github.com/alooow/fantastic_weights_paper
Repository for the paper "Fantastic Weights and How to Find Them: Where to Prune in Dynamic Sparse Training"
NeurIPS 2023 Accepted
Abstract: Dynamic Sparse Training, during the training process, adapti..
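For context, here is the shape of one prune-and-regrow update in a SET-style dynamic sparse training loop (magnitude drop, random regrow). The paper's point is precisely that the choice of criterion matters, so treat the criteria below as placeholders.

```python
import torch

def prune_and_regrow(weight: torch.Tensor, mask: torch.Tensor, drop_frac: float = 0.3):
    flat_w, flat_m = weight.view(-1), mask.view(-1)
    active = flat_m.bool()                      # snapshot of the current mask
    n_drop = int(drop_frac * active.sum().item())
    if n_drop == 0:
        return mask
    # Positions inactive *before* this update; regrowth picks from these.
    inactive_idx = (~active).nonzero().squeeze(1)
    # Drop: deactivate the n_drop active weights with the smallest magnitude.
    scores = flat_w.abs().masked_fill(~active, float("inf"))
    drop_idx = torch.topk(scores, n_drop, largest=False).indices
    flat_m[drop_idx] = 0.0
    # Regrow: activate n_drop randomly chosen previously-inactive positions.
    grow_idx = inactive_idx[torch.randperm(inactive_idx.numel())[:n_drop]]
    flat_m[grow_idx] = 1.0
    return mask
```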

https://arxiv.org/abs/2210.13810
Toward domain generalized pruning by scoring out-of-distribution importance
Filter pruning has been widely used for compressing convolutional neural networks to reduce computation costs during the deployment stage. Recent studies have shown that filter pruning techniques can achieve lossless compression of deep neural networks, re..
Accepted in Workshop o..
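As a reminder of the scaffold this kind of work plugs into, filter pruning scores each conv filter and keeps the top-scoring ones; the plain L1-norm score below is the common baseline, not the paper's out-of-distribution-aware importance.

```python
import torch
import torch.nn as nn

def l1_filter_scores(conv: nn.Conv2d) -> torch.Tensor:
    # One score per output filter: sum of absolute weights.
    return conv.weight.detach().abs().sum(dim=(1, 2, 3))

def keep_indices(conv: nn.Conv2d, prune_ratio: float = 0.3) -> torch.Tensor:
    # Indices of the filters to retain after dropping the lowest-scoring fraction.
    scores = l1_filter_scores(conv)
    n_keep = conv.out_channels - int(prune_ratio * conv.out_channels)
    return torch.topk(scores, n_keep).indices.sort().values
```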

https://arxiv.org/abs/2109.14960
Prune Your Model Before Distill It
Knowledge distillation transfers the knowledge from a cumbersome teacher to a small student. Recent results suggest that the student-friendly teacher is more appropriate to distill since it provides more transferable knowledge. In this work, we propose the..
https://github.com/ososos888/prune-then-distill
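The distillation half, for reference: the (already pruned) teacher provides soft targets and the student minimizes a mix of KL divergence and cross-entropy. The temperature and mixing weight are illustrative values, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Soft-target term: KL between temperature-scaled teacher and student outputs.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    # Hard-target term: ordinary cross-entropy with the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```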