[AIRG] Model Compression -- Binarized Neural Network


Date: Wed, 31 Oct 2018 23:37:33 +0000
From: Bryce XU <xxu373@xxxxxxxx>
Subject: [AIRG] Model Compression -- Binarized Neural Network

Hi, everyone!


I am a visiting junior undergraduate from the University of Electronic Science and Technology of China.


Next Wednesday (7th Nov.), I would like to share something about Model Compression with you all.


I will first talk about the significance of model compression in artificial intelligence and possible approaches to it. Then I will show how to binarize a model in order to compress it and speed it up. Afterwards, I will talk a little about my experiments and some problems we may encounter when binarizing a model. Last, some recent work will be covered.


The paper you need to read is https://arxiv.org/abs/1602.02830.
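To give a quick picture of what binarization looks like in practice, here is a minimal PyTorch sketch of the core idea from the paper above (my own illustration, not the exact code from my experiments): weights are binarized to ±1 with a sign function in the forward pass, while gradients flow back through a straight-through estimator and the real-valued weights are kept for the optimizer.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Deterministic sign binarization with a straight-through estimator.

    Forward: w_b = sign(w) in {-1, +1}.
    Backward: pass the gradient through where |w| <= 1, zero it elsewhere.
    """

    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        # Map zeros to +1 so the output stays strictly in {-1, +1}.
        return torch.where(w >= 0, torch.ones_like(w), -torch.ones_like(w))

    @staticmethod
    def backward(ctx, grad_output):
        (w,) = ctx.saved_tensors
        return grad_output * (w.abs() <= 1).to(grad_output.dtype)


class BinaryLinear(torch.nn.Linear):
    """Linear layer that keeps real-valued weights for the optimizer
    but uses their binarized copy in the forward pass."""

    def forward(self, x):
        w_bin = BinarizeSTE.apply(self.weight)
        return torch.nn.functional.linear(x, w_bin, self.bias)
```

In a full binarized network the activations would be binarized the same way, and at inference time the ±1 weights can be packed into bits so the multiply-accumulates become XNOR and popcount operations, which is where the compression and speedup come from.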


If you are interested in this topic, here are some optional papers:

http://papers.nips.cc/paper/5647-binaryconnect-training-deep-neural-networks-with-b
https://link.springer.com/chapter/10.1007/978-3-319-46493-0_32

https://arxiv.org/abs/1612.01064

https://arxiv.org/abs/1806.07550


Best,


Xianda (Bryce) Xu




