From 8afac56f3eb2a4953d0f9e101c094ee5c41e64b0 Mon Sep 17 00:00:00 2001
From: Neta Zmora <31280975+nzmora@users.noreply.github.com>
Date: Sun, 21 Jul 2019 14:43:52 +0300
Subject: [PATCH] Updated README

Added 2 new citations and a code link for an existing citation.
---
 README.md | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index f0b6b30..97cd0dd 100755
--- a/README.md
+++ b/README.md
@@ -395,6 +395,10 @@ This project is licensed under the Apache License 2.0 - see the [LICENSE.md](LIC
 
 ### Research papers citing Distiller:
 
+- Soroush Ghodrati, Hardik Sharma, Sean Kinzer, Amir Yazdanbakhsh, Kambiz Samadi, Nam Sung Kim, Doug Burger, Hadi Esmaeilzadeh.<br>
+*[Mixed-Signal Charge-Domain Acceleration of Deep Neural Networks through Interleaved Bit-Partitioned Arithmetic](https://arxiv.org/abs/1906.11915)*,<br>
+arXiv:1906.11915, 2019.
+
 - Gil Shomron, Tal Horowitz, Uri Weiser.<br>
 *[SMT-SA: Simultaneous Multithreading in Systolic Arrays](https://ieeexplore.ieee.org/document/8742541)*,<br>
 In IEEE Computer Architecture Letters (CAL), 2019.
@@ -411,13 +415,18 @@ In IEEE Computer Architecture Letters (CAL), 2019.
   *[SinReQ: Generalized Sinusoidal Regularization for Low-Bitwidth Deep Quantized Training](https://arxiv.org/abs/1905.01416),*<br>
   arXiv:1905.01416, 2019.
 
+- A. Goncharenko, A. Denisov, S. Alyamkin, E. Terentev.<br>
+*[Trainable Thresholds for Neural Network Quantization](https://rd.springer.com/chapter/10.1007/978-3-030-20518-8_26),*<br>
+In: Rojas I., Joya G., Catala A. (eds) Advances in Computational Intelligence, Lecture Notes in Computer Science, vol 11507. Springer, Cham. International Work-Conference on Artificial Neural Networks (IWANN 2019).
+
 - Ahmed T. Elthakeb, Prannoy Pilligundla, Hadi Esmaeilzadeh.<br>
   *[Divide and Conquer: Leveraging Intermediate Feature Representations for Quantized Training of Neural Networks](https://arxiv.org/abs/1906.06033),*
   arXiv:1906.06033, 2019
 
 - Ritchie Zhao, Yuwei Hu, Jordan Dotzel, Christopher De Sa, Zhiru Zhang.<br>
   *[Improving Neural Network Quantization without Retraining using Outlier Channel Splitting](https://arxiv.org/abs/1901.09504),*<br>
-  arXiv:1901.09504, 2019
+  arXiv:1901.09504, 2019<br>
+  [Code](https://github.com/cornell-zhang/dnn-quant-ocs)
 
 - Angad S. Rekhi, Brian Zimmer, Nikola Nedovic, Ningxi Liu, Rangharajan Venkatesan, Miaorong Wang, Brucek Khailany, William J. Dally, C. Thomas Gray.<br>
 *[Analog/Mixed-Signal Hardware Error Modeling for Deep Learning Inference](https://research.nvidia.com/sites/default/files/pubs/2019-06_Analog/Mixed-Signal-Hardware-Error/40_2_Rekhi_AMS_ML.pdf)*,<br>
-- 
GitLab