Nadav Cohen — Selected Talks

FoCM 2023 (Paris, France)
Invited talk
What Makes Data Suitable for Deep Learning?
[slides]

ICML 2022 (Baltimore, MD, USA)
Invited talk
Continuous vs. Discrete Optimization of Deep Neural Networks
[video|slides]

Workshop on Tensor Methods and their Applications in the Physical and Data Sciences 2021 (virtual)
Invited talk
Implicit Regularization in Deep Learning: Lessons Learned from Matrix and Tensor Factorization
[video|slides]

IMVC 2020 (virtual)
Invited keynote talk
Practical Implications of Theoretical Deep Learning
[video|slides]

NeurIPS 2019 (Vancouver, Canada)
Contributed talk
Implicit Regularization in Deep Matrix Factorization
[video|slides]

AI Week 2019 (Tel Aviv-Yafo, Israel)
Invited talk
Analyzing Optimization and Generalization in Deep Learning via Trajectories of Gradient Descent
[video|slides]

CECAM Workshop on Quantum Computing and Quantum Chemistry 2019 (Tel Aviv-Yafo, Israel)
Invited talk
Expressiveness in Deep Learning via Quantum Entanglement
[slides]

Simons Institute Workshop on Frontiers of Deep Learning 2019 (Berkeley, CA, USA)
Invited talk
Analyzing Optimization and Generalization in Deep Learning via Trajectories of Gradient Descent
[video|slides]

ICERM Workshop on Theory and Practice in Machine Learning and Computer Vision 2019 (Providence, RI, USA)
Invited talk
Analyzing Optimization in Deep Learning via Trajectories
[video|slides]

ICML 2018 (Stockholm, Sweden)
Contributed talk
On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization
[video|slides]

ICLR 2018 (Vancouver, Canada)
Contributed talk
Boosting Dilated Convolutional Networks with Mixed Tensor Decompositions
[video|slides]

Symposium on the Mathematical Theory of Deep Neural Networks 2018 (Princeton, NJ, USA)
Invited talk
On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization
[video|slides]

Symposium on Physics and Machine Learning 2017 (New York City, NY, USA)
Invited talk
Understanding Deep Learning via Physics: The Use of Quantum Entanglement for Studying the Inductive Bias of Convolutional Networks
[slides]

Mathematics of Deep Learning Workshop 2017 (Berlin, Germany)
Invited talk
Expressiveness of Convolutional Networks via Hierarchical Tensor Decompositions
[slides]

AAAI Spring Symposium Series 2017 (Palo Alto, CA, USA)
Invited talk
Expressive Efficiency and Inductive Bias of Convolutional Networks: Analysis & Design via Hierarchical Tensor Decompositions
[slides]

CVPR 2017 (Honolulu, HI, USA)
Invited talk
Expressive Efficiency and Inductive Bias of Convolutional Networks: Analysis & Design via Hierarchical Tensor Decompositions
[slides]

GAMM 2017 (Weimar, Germany)
Invited talk
On the Expressive Power of Deep Learning: A Tensor Analysis
[slides]

NeurIPS 2016 (Barcelona, Spain)
Invited talk
Inductive Bias of Deep Convolutional Networks through Pooling Geometry
[slides]

ICML 2016 (New York City, NY, USA)
Contributed talk
Convolutional Rectifier Networks as Generalized Tensor Decompositions
[video|slides]

COLT 2016 (New York City, NY, USA)
Contributed talk
On the Expressive Power of Deep Learning: A Tensor Analysis
[video|slides]