# jemdoc: menu{MENU}{publications.html}, showsource
= Nadav Cohen --- Publications (see also [https://scholar.google.com/citations?hl=en&user=DmzoCRMAAAAJ&view_op=list_works&sortby=pubdate Google Scholar])

*\+* /Supervised student paper/ \n
*\** /Primary authorship/

=== Preprints

*\+* [https://arxiv.org/abs/2402.07875 Implicit Bias of Policy Gradient in Linear Quadratic Control: Extrapolation to Unseen Initial States] \n
Noam Razin, Yotam Alexander, Edo Cohen-Karlik, Raja Giryes, Amir Globerson and Nadav Cohen. Feb'24. \n
/Preprint./

=== Conference Proceedings

*\+* [https://openreview.net/forum?id=4aIpgq1nuI What Makes Data Suitable for a Locally Connected Neural Network? A Necessary and Sufficient Condition Based on Quantum Entanglement] ([https://arxiv.org/abs/2303.11249 extended arXiv version]) \n
Yotam Alexander, Nimrod De La Vega, Noam Razin and Nadav Cohen. Mar'23 (v1), Oct'23 (v2). \n
/Conference on Neural Information Processing Systems (*NeurIPS*) 2023, Spotlight Track (*top 3%*)./

*\+* [https://openreview.net/forum?id=ayZpFoAu5c On the Ability of Graph Neural Networks to Model Interactions Between Vertices] ([https://arxiv.org/abs/2211.16494 extended arXiv version]) \n
Noam Razin, Tom Verbin and Nadav Cohen. Nov'22 (v1), Oct'23 (v2). \n
/Conference on Neural Information Processing Systems (*NeurIPS*) 2023./

*\+* [https://openreview.net/forum?id=k9CF4h3muD Learning Low Dimensional State Spaces with Overparameterized Recurrent Neural Nets] \n
Edo Cohen-Karlik, Itamar Menuhin-Gruman, Raja Giryes, Nadav Cohen and Amir Globerson. Oct'22 (v1), Mar'23 (v2). \n
/International Conference on Learning Representations (*ICLR*) 2023./

*\+* [https://proceedings.mlr.press/v162/razin22a.html Implicit Regularization in Hierarchical Tensor Factorization and Deep Convolutional Neural Networks] ([https://arxiv.org/abs/2201.11729 extended arXiv version]) \n
Noam Razin, Asaf Maman and Nadav Cohen. Jan'22. \n
/International Conference on Machine Learning (*ICML*) 2022./

[https://proceedings.mlr.press/v151/cohen-karlik22a.html On the Implicit Bias of Gradient Descent for Temporal Extrapolation] \n
Edo Cohen-Karlik, Avichai Ben David, Nadav Cohen and Amir Globerson. Feb'22. \n
/International Conference on Artificial Intelligence and Statistics (*AISTATS*) 2022./

*\+* [https://openreview.net/forum?id=iX0TSH45eOd Continuous vs. Discrete Optimization of Deep Neural Networks] ([https://arxiv.org/abs/2107.06608 extended arXiv version]) \n
Omer Elkabetz and Nadav Cohen. Jul'21 (v1), Dec'21 (v2). \n
/Conference on Neural Information Processing Systems (*NeurIPS*) 2021, Spotlight Track (*top 3%*)./

*\+* [http://proceedings.mlr.press/v139/razin21a.html Implicit Regularization in Tensor Factorization] \n
Noam Razin, Asaf Maman and Nadav Cohen. Feb'21. \n
/International Conference on Machine Learning (*ICML*) 2021./

*\+* [https://papers.nips.cc/paper/2020/hash/f21e255f89e0f258accbe4e984eef486-Abstract.html Implicit Regularization in Deep Learning May Not Be Explainable by Norms] ([https://arxiv.org/abs/2005.06398 extended arXiv version]) \n
Noam Razin and Nadav Cohen. May'20 (v1), Oct'20 (v2). \n
/Conference on Neural Information Processing Systems (*NeurIPS*) 2020./

*\** [https://papers.nips.cc/paper/8960-implicit-regularization-in-deep-matrix-factorization Implicit Regularization in Deep Matrix Factorization] \n
Sanjeev Arora, Nadav Cohen, Wei Hu and Yuping Luo (alphabetical order). Jun'19 (v1), Oct'19 (v2). \n
/Conference on Neural Information Processing Systems (*NeurIPS*) 2019, Spotlight Track (*top 3%*)./

*\** [https://openreview.net/pdf?id=SkMQg3C5K7 A Convergence Analysis of Gradient Descent for Deep Linear Neural Networks] \n
Sanjeev Arora, Nadav Cohen, Noah Golowich and Wei Hu (alphabetical order). Oct'18 (v1), Nov'18 (v2). \n
/International Conference on Learning Representations (*ICLR*) 2019./

*\** [http://proceedings.mlr.press/v80/arora18a.html On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization] \n
Sanjeev Arora, Nadav Cohen and Elad Hazan (alphabetical order). Feb'18. \n
/International Conference on Machine Learning (*ICML*) 2018./

[https://openaccess.thecvf.com/content_cvpr_2018/html/Shocher_Zero-Shot_Super-Resolution_Using_CVPR_2018_paper.html "Zero-Shot" Super-Resolution Using Deep Internal Learning] \n
Assaf Shocher, Nadav Cohen and Michal Irani. Dec'17. \n
/Conference on Computer Vision and Pattern Recognition (*CVPR*) 2018./

*\** [https://openreview.net/pdf?id=S1JHhv6TW Boosting Dilated Convolutional Networks with Mixed Tensor Decompositions] ([https://arxiv.org/abs/1703.06846 extended arXiv version]) \n
Nadav Cohen, Ronen Tamari and Amnon Shashua. Apr'17. \n
/International Conference on Learning Representations (*ICLR*) 2018, Oral Track (*top 1%*)./

[https://openreview.net/pdf?id=SywXXwJAb Deep Learning and Quantum Entanglement: Fundamental Connections with Implications to Network Design] ([https://arxiv.org/abs/1704.01552 extended arXiv version]) \n
Yoav Levine, David Yakira, Nadav Cohen and Amnon Shashua. Apr'17. \n
/International Conference on Learning Representations (*ICLR*) 2018./

*\** [https://openreview.net/pdf?id=BkVsEMYel Inductive Bias of Deep Convolutional Networks through Pooling Geometry] \n
Nadav Cohen and Amnon Shashua. May'16 (v1), Nov'16 (v2). \n
/International Conference on Learning Representations (*ICLR*) 2017./

*\** [https://proceedings.mlr.press/v48/cohenb16.html Convolutional Rectifier Networks as Generalized Tensor Decompositions] ([https://arxiv.org/abs/1603.00162 extended arXiv version]) \n
Nadav Cohen and Amnon Shashua. Mar'16. \n
/International Conference on Machine Learning (*ICML*) 2016./

*\** [http://www.jmlr.org/proceedings/papers/v49/cohen16.html On the Expressive Power of Deep Learning: A Tensor Analysis] \n
Nadav Cohen, Or Sharir and Amnon Shashua. Sep'15 (v1), Feb'16 (v2). \n
/Conference on Learning Theory (*COLT*) 2016./

*\** [http://www.cv-foundation.org/openaccess/content_cvpr_2016/html/Cohen_Deep_SimNets_CVPR_2016_paper.html Deep SimNets] \n
Nadav Cohen, Or Sharir and Amnon Shashua. Jun'15 (v1), Nov'15 (v2). \n
/Conference on Computer Vision and Pattern Recognition (*CVPR*) 2016./

=== Journals

[https://arxiv.org/abs/2210.12497 Deep Linear Networks for Matrix Completion --- An Infinite Depth Limit] \n
Nadav Cohen, Govind Menon and Zsolt Veraszto. Oct'22. \n
/Society for Industrial and Applied Mathematics (*SIAM*) Journal on Applied Dynamical Systems./

[https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.122.065301 Quantum Entanglement in Deep Learning Architectures] ([https://arxiv.org/abs/1803.09780 arXiv version]) \n
Yoav Levine, Or Sharir, Nadav Cohen and Amnon Shashua. Mar'18 (v1), Feb'19 (v2). \n
/Physical Review Letters (*PRL*)./

=== Book Chapters

[https://www.cambridge.org/core/books/abs/mathematical-aspects-of-deep-learning/bridging-manybody-quantum-physics-and-deep-learning-via-tensor-networks/2FB534193E0334912020ED4FCD96C4ED Bridging Many-Body Quantum Physics and Deep Learning via Tensor Networks] \n
Yoav Levine, Or Sharir, Nadav Cohen and Amnon Shashua. Dec'22. \n
/[https://www.cambridge.org/core/books/mathematical-aspects-of-deep-learning/8D9B41D1E9BB8CA515E93412EECC2A7E Mathematical Aspects of Deep Learning], Cambridge University Press./

[https://www.sciencedirect.com/science/article/pii/B9780128244470000133 Tensors for Deep Learning Theory: Analyzing Deep Learning Architectures via Tensorization] \n
Yoav Levine, Noam Wies, Or Sharir, Nadav Cohen and Amnon Shashua. Nov'21. \n
/[https://www.sciencedirect.com/book/9780128244470/tensors-for-data-processing Tensors for Data Processing: Theory, Methods and Applications], Academic Press./

=== Invited Papers, Workshops and Technical Reports

*\** [https://arxiv.org/abs/1705.02302 Analysis and Design of Convolutional Networks via Hierarchical Tensor Decompositions] \n
Nadav Cohen, Or Sharir, Yoav Levine, Ronen Tamari, David Yakira and Amnon Shashua. Jun'17. \n
/Intel Collaborative Research Institute Special Issue on Deep Learning Theory./

[https://arxiv.org/abs/1610.04167v2 Tensorial Mixture Models] \n
Or Sharir, Ronen Tamari, Nadav Cohen and Amnon Shashua. Oct'16. \n
/Technical Report./

*\** [https://fb56552f-a-62cb3a1a-s-sites.googlegroups.com/site/deeplearningworkshopnips2014/41.pdf SimNets: A Generalization of Convolutional Networks] \n
Nadav Cohen and Amnon Shashua. Oct'14 (v1), Dec'14 (v2). \n
/Conference on Neural Information Processing Systems (NeurIPS) 2014, Deep Learning Workshop./