Guido Montúfar
UCLA Department of Mathematics and Department of Statistics
Los Angeles, CA, USA, 90095-1555
Phone: +1 (310) 206-2671
Email: montufar at math.ucla.edu

Max Planck Institute for Mathematics in the Sciences
Inselstrasse 22, 04103 Leipzig, Germany
Phone: +49 (0) 341 - 9959 - 564
Email: montufar at mis.mpg.de
Short CV
Assistant Professor, Departments of Mathematics and Statistics, UCLA (since 07/2017)
Research Group Leader, Mathematical Machine Learning Group, Max Planck Institute for Mathematics in the Sciences (since 07/2018)
Postdoc, Max Planck Institute for Mathematics in the Sciences, Information Theory of Cognitive Systems Group (06/2013-09/2017)
Research Associate, Department of Mathematics, Pennsylvania State University (02/2012-05/2013)
PhD in Mathematics, MPI MIS, Leipzig University (10/2012)
Research Assistant, Institute for Theoretical Physics, TU-Berlin (03/2008-12/2008)
Diplom Physiker, TU-Berlin (12/2008)
Teaching Assistant, Institute for Mathematics, TU-Berlin (03/2006-02/2008)
Diplom Mathematiker, TU-Berlin (08/2007)
Research Interests
Deep Learning Theory
Mathematical Machine Learning
Graphical Models
Information Geometry
Algebraic Statistics



Publications


2022

Implicit Bias of MSE Gradient Optimization in Underparameterized Neural Networks.
Benjamin Bowman and Guido Montufar. The Tenth International Conference on Learning Representations (ICLR 2022). Preprint [arXiv:2201.04738].
Learning curves for Gaussian process regression with power-law priors and targets.
Hui Jin, Pradeep Kr. Banerjee, and Guido Montufar. The Tenth International Conference on Learning Representations (ICLR 2022). Preprint [arXiv:2110.12231].
Power-law asymptotics of the generalization error for GP regression under power-law priors and targets.
Workshop version presented at Workshop on Bayesian Deep Learning NeurIPS, 2021.
The Geometry of Memoryless Stochastic Policy Optimization in Infinite-Horizon POMDPs.
Johannes Müller and Guido Montufar. The Tenth International Conference on Learning Representations (ICLR 2022). Preprint [arXiv:2110.07409].
2021

Weisfeiler and Lehman go cellular: CW networks.
Christian Bodnar, Fabrizio Frasca, Nina Otter, Yu Guang Wang, Pietro Lio, Guido Montufar, and Michael Bronstein. Advances in Neural Information Processing Systems 34 (NeurIPS 2021). Preprint [arXiv:2106.12575]. Repo [GitHub]. Virtual poster [NeurIPS].
On the expected complexity of maxout networks.
Hanna Tseran and Guido Montufar. Advances in Neural Information Processing Systems 34 (NeurIPS 2021). Preprint [arXiv:2107.00379]. Repo [GitHub]. Virtual poster [NeurIPS].
Geometry of Linear Convolutional Networks.
Kathlén Kohn, Thomas Merkh, Guido Montufar, and Matthew Trager. Preprint [arXiv:2108.01538].
Sharp bounds for the number of regions of maxout networks and vertices of Minkowski sums.
Guido Montufar, Yue Ren, and Leon Zhang. Preprint [arXiv:2104.08135].
A top-down approach to attain decentralized multi-agents.
Alex Tong Lin, Guido Montufar, and Stanley Osher. Handbook of Reinforcement Learning and Control, pp 419-431, Springer, 2021.
Distributed learning via filtered hyperinterpolation on manifolds.
Guido Montufar and Yu Guang Wang. Foundations of Computational Mathematics, 2021. Preprint [arXiv:2007.09392].
How framelets enhance graph neural networks.
Xuebin Zheng, Bingxin Zhou, Junbin Gao, Yu Guang Wang, Pietro Lio, Ming Li, and Guido Montufar. In Proceedings of the 38th International Conference on Machine Learning (ICML), PMLR 139:12761-12771, 2021. Preprint [arXiv:2102.06986]. Repo [GitHub].
Weisfeiler and Lehman go topological: message passing simplicial networks.
Christian Bodnar, Fabrizio Frasca, Yu Guang Wang, Nina Otter, Guido Montufar, Pietro Lio, and Michael Bronstein. In Proceedings of the 38th International Conference on Machine Learning (ICML), PMLR 139:1026-1037, 2021. Virtual poster [ICML]. Preprint [arXiv:2103.03212].
Workshop version presented at Workshop on Geometrical and Topological Representation Learning ICLR, 2021.
Tight bounds on the smallest eigenvalue of the neural tangent kernel for deep ReLU networks.
Quynh Nguyen, Marco Mondelli, and Guido Montufar. In Proceedings of the 38th International Conference on Machine Learning (ICML), PMLR 139:8119-8129, 2021. Preprint [arXiv:2012.11654].
Information complexity and generalization bounds.
Pradeep Kumar Banerjee and Guido Montufar. In IEEE International Symposium on Information Theory (ISIT), 2021. Preprint [arXiv:2105.01747].
Wasserstein proximal of GANs.
Alex Tong Lin, Wuchen Li, Stanley Osher, and Guido Montufar. In Proceedings of the 5th International Conference on Geometric Science of Information (GSI), LNCS, vol 12829, pp 524-533, Springer, 2021. Preprint [arXiv:2102.06862], [CAM report 18-53], [RG]. Poster [pdf].
Decentralized multi-agents by imitations of a centralized controller.
Alex Tong Lin, Mark J. Debord, Katia Estabridis, Gary Hewer, Guido Montufar, and Stanley Osher. In 2nd Annual Conference on Mathematical and Scientific Machine Learning (MSML), 2021. Preprint [arXiv:1902.02311].
Wasserstein distance to independence models.
Türkü Özlüm Çelik, Asgar Jamneshan, Guido Montufar, Bernd Sturmfels, and Lorenzo Venturello. Journal of Symbolic Computation, 104:855-873, 2021. Preprint [arXiv:2003.06725].
PAC-Bayes and information complexity.
Pradeep Kr. Banerjee and Guido Montufar. Presented at Workshop on Neural Compression: From Information Theory to Applications ICLR, 2021.

2020

Implicit bias of gradient descent for mean squared error regression with wide neural networks.
Hui Jin and Guido Montufar. Preprint [arXiv:2006.07356]. Repo [GitHub].
Optimization theory for ReLU neural networks trained with normalization layers.
Yonatan Dukler, Quanquan Gu, and Guido Montufar. In Proceedings of the 37th International Conference on Machine Learning (ICML), PMLR 119:2751-2760, 2020. Virtual poster [ICML]. Preprint [arXiv:2006.06878].
Haar graph pooling.
Yu Guang Wang, Ming Li, Zheng Ma, Guido Montufar, Xiaosheng Zhuang, and Yanan Fan. In Proceedings of the 37th International Conference on Machine Learning (ICML), PMLR 119:9952-9962, 2020. Virtual poster [ICML]. Preprint [arXiv:1909.11580]. Repo [GitHub].
The variational deficiency bottleneck.
Pradeep Kumar Banerjee and Guido Montufar. In Proceedings of the International Joint Conference on Neural Networks (IJCNN), 2020. Preprint [arXiv:1810.11677].
Kernelized Wasserstein Natural Gradient.
Michael Arbel, Arthur Gretton, Wuchen Li, and Guido Montufar. In International Conference on Learning Representations (ICLR), 2020. Preprint [arXiv:1910.09652]. Repo [GitHub].
Factorized Mutual Information Maximization.
Thomas Merkh and Guido Montufar. Kybernetika 56(5):948-978, 2020. Preprint [arXiv:1906.05460], [RG].
Ricci curvature for parametric statistics via optimal transport.
W. Li and G. Montufar. Information Geometry 3(1):89-117, 2020. Preprint [arXiv:1807.07095].
Can neural networks learn persistent homology features?
Guido Montufar, Nina Otter, and Yu Guang Wang. Presented at Workshop on Topological Data Analysis and Beyond NeurIPS, 2020. Preprint [arXiv:2011.14688].

2019

Optimal Transport to a Variety.
T. O. Celik, A. Jamneshan, G. Montufar, B. Sturmfels, L. Venturello. In International Conference on Mathematical Aspects of Computer and Information Sciences (MACIS) 2019, LNCS, vol 11989, pp 364-381, Springer, 2020. Preprint [arXiv:1909.11716], [MPI MIS 7/2021].
Wasserstein of Wasserstein loss for learning generative models.
Y. Dukler, W. Li, A. Lin, and G. Montufar. In Proceedings of the 36th International Conference on Machine Learning (ICML), PMLR 97:1716-1725, 2019. [BibTex]. Preprint [MPI MIS 13/2019]. Repo [GitHub].
Wasserstein Diffusion Tikhonov Regularization.
Alex Tong Lin, Yonatan Dukler, Wuchen Li, and Guido Montufar. Presented at Optimal Transport and Machine Learning Workshop NeurIPS, 2019. Preprint [arXiv:1909.06860], [RG].
A continuity result for optimal memoryless planning in POMDPs.
J. Rauh, N. Ay, G. Montufar. Presented at The 4th Multidisciplinary Conference on Reinforcement Learning and Decision Making (RLDM), 2019. [pdf]. Preprint [MPI MIS 5/2021], [RG].
Task-Agnostic Constraining in Average Reward POMDPs.
G. Montufar, J. Rauh, N. Ay. Presented at Task-Agnostic Reinforcement Learning Workshop ICLR, 2019. [pdf]. Preprint [MPI MIS 9/2021], [RG].
Stochastic Feedforward Neural Networks: Universal Approximation.
T. Merkh and G. Montufar. Preprint [arXiv:1910.09763], [RG].
How Well Do WGANs Estimate the Wasserstein Metric?
Anton Mallasto, Guido Montufar, Augusto Gerolin. Preprint [arXiv:1910.03875].

2018

Natural Gradient via Optimal Transport.
W. Li and G. Montufar. Information Geometry 1(2):181-214, 2018. Preprint [arXiv:1803.07033].
Restricted Boltzmann Machines: Introduction and Review.
G. Montufar. Information geometry and its applications (IGAIA IV), pp 75-115, Springer, 2018. Preprint [arXiv:1806.07066].
Computing the Unique Information.
P. K. Banerjee, J. Rauh, and G. Montufar. IEEE International Symposium on Information Theory (ISIT), pages 141-145, 2018. Preprint [arXiv:1709.07487]. Repo [GitHub].
Mixtures and Products in two Graphical Models.
A. Seigal and G. Montufar. Journal of Algebraic Statistics 9(1), 2018. Preprint [arXiv:1709.05276].
Uncertainty and Stochasticity of Optimal Policies.
G. Montufar, J. Rauh, N. Ay. In Proceedings of the 11th Workshop on Uncertainty Processing (WUPES), pp 133-140, 2018. Preprint [MPI MIS 8/2021], [RG].

2017

Geometry of Policy Improvement.
G. Montufar and J. Rauh. In Geometric Science of Information (GSI), LNCS, vol 10589, pp 282-290, Springer, 2017. [BibTex]. Preprint [arXiv:1704.01785].
Morphological Computation: The good, the bad, and the ugly.
K. Ghazi-Zahedi, R. Deimel, G. Montufar, V. Wall, and O. Brock. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 464-469, 2017. [BibTex]. Preprint [RG].
Dimension of Marginals of Kronecker Product Models.
G. Montufar and J. Morton. SIAM Journal on Applied Algebra and Geometry (SIAGA) 1(1):126-151, 2017. [BibTex]. Preprint [arXiv:1511.03570], [MPI MIS 75/2015]. Supplement [JacobianKronecker.m].
Stochasticity of optimal policies for POMDPs.
G. Montufar, K. Ghazi-Zahedi, N. Ay. Presented at Reinforcement Learning and Decision Making (RLDM), 2017.
Notes on the number of linear regions of deep neural networks.
G. Montufar. Presented at Mathematics of Deep Learning, Sampling Theory and Applications (SampTA), 2017. Preprint [RG].

2016

Evaluating Morphological Computation in Muscle and DC-motor Driven Models of Hopping Movements.
K. Ghazi-Zahedi, D. Haeufle, G. Montufar, S. Schmitt, and N. Ay. Frontiers in Robotics and AI 3(42):frobt.2016.00042, 2016. [BibTeX]. Preprint [arXiv:1512.00250].
Information Theoretically Aided Reinforcement Learning for Embodied Agents.
G. Montufar, K. Ghazi-Zahedi, and N. Ay. Preprint [arXiv:1605.09735], [RG].
Geometry and Determinism of Optimal Stationary Control in Partially Observable Markov Decision Processes.
G. Montufar, K. Ghazi-Zahedi, and N. Ay. Preprint [arXiv:1503.07206], [MPI MIS 22/2016].

2015

A Theory of Cheap Control in Embodied Systems.
G. Montufar, K. Zahedi, and N. Ay. PLoS Comput Biol 11(9):e1004427, 2015. [BibTeX]. Preprint [arXiv:1407.6836], [MPI MIS 70/2014].
Geometry and Expressive Power of Conditional Restricted Boltzmann Machines.
G. Montufar, N. Ay, and K. Zahedi. Journal of Machine Learning Research JMLR 16(Dec):2405-2436, 2015. [BibTeX]. Preprint [arXiv:1402.3346], [MPI MIS 16/2014].
When Does a Mixture of Products Contain a Product of Mixtures?
G. Montufar and J. Morton. SIAM Journal on Discrete Mathematics (SIDMA) 29(1):321-347, 2015. [BibTeX]. Preprint [arXiv:1206.0387], [MPI MIS 98/2014].
Deep Narrow Boltzmann Machines are Universal Approximators.
G. Montufar. In Third International Conference on Learning Representations (ICLR), 2015. [BibTeX]. Preprint [arXiv:1411.3784], [MPI MIS 113/2014].
A Comparison of Neural Network Architectures.
G. Montufar. Presented at Deep Learning Workshop ICML, 2015. [pdf], [pdf].
Universal Approximation of Markov Kernels by Shallow Stochastic Feedforward Networks.
G. Montufar. Preprint [arXiv:1503.07211], [MPI MIS 23/2015].

2014

On the Number of Linear Regions of Deep Neural Networks.
G. Montufar, R. Pascanu, K. Cho, and Y. Bengio. Neural Information Processing Systems (NIPS) 27, pp. 2924-2932, 2014. [BibTeX]. Preprint [MPI MIS 73/2014], [arXiv 1402.1869]
On the Number of Response Regions of Deep Feedforward Networks with Piecewise Linear Activations.
R. Pascanu, G. Montufar, and Y. Bengio. In Second International Conference on Learning Representations (ICLR), 2014. [BibTeX]. Preprint [MPI MIS 72/2014], [arXiv 1312.6098]
On the Fisher Information Metric of Conditional Probability Polytopes.
G. Montufar, J. Rauh, and N. Ay. Entropy 16(6):3207-3233, 2014. [BibTeX]. Preprint [MPI MIS 87/2014], [arXiv 1404.0198]
Scaling of Model Approximation Errors and Expected Entropy Distances.
G. Montufar and J. Rauh. Kybernetika 50(2):234-245, 2014. [BibTeX]. Workshop version WUPES 2012, pp. 137-148. Preprint [arXiv 1207.3399]
Universal Approximation Depth and Errors of Narrow Belief Networks with Discrete Units.
G. Montufar. Neural Computation 26(7):1386-1407, 2014. [BibTeX]. Preprint [MPI MIS 74/2014], [arXiv 1303.7461]
Sequential Recurrence-Based Multidimensional Universal Source Coding of Lempel-Ziv Type.
T. Krueger, G. Montufar, R. Seiler, and R. Siegmund-Schultze. Preprint [MPI MIS 86/2014], [arXiv 1408.4433].

2013

Universally Typical Sets for Ergodic Sources of Multidimensional Data.
T. Krüger, G. Montufar, R. Seiler, and R. Siegmund-Schultze. Kybernetika 49(6):868-882, 2013. [BibTeX]. Preprint [MPI MIS 20/2011], [arXiv 1105.0393]
Mixture Decompositions of Exponential Families Using a Decomposition of their Sample Spaces.
G. Montufar. Kybernetika 49(1):23-39, 2013. [BibTeX]. Preprint [MPI MIS 39/2010], [arXiv 1008.0204]
Maximal Information Divergence from Statistical Models defined by Neural Networks.
G. Montufar, J. Rauh, and N. Ay. In Geometric Science of Information (GSI), LNCS, vol 8085, pp 759-766, Springer, 2013. [BibTeX]. Preprint [MPI MIS 31/2013], [arXiv 1303.0268]
Selection Criteria for Neuromanifolds of Stochastic Dynamics.
N. Ay, G. Montufar, J. Rauh. In Advances in Cognitive Neurodynamics (III), pp 147-154, 2013. [BibTeX]. Preprint [MPI MIS 15/2011]

2012

Kernels and Submodels of Deep Belief Networks.
G. Montufar and J. Morton. Deep Learning and Unsupervised Feature Learning Workshop NIPS, 2012. Preprint [arXiv 1211.0932]

2011

Expressive Power and Approximation Errors of Restricted Boltzmann Machines.
G. Montufar, J. Rauh, and N. Ay. Neural Information Processing Systems (NIPS) 24, pp 415-423, 2011. [BibTeX]. Preprint [MPI MIS 27/2011], [arXiv 1406.3140]
Refinements of Universal Approximation Results for Restricted Boltzmann Machines and Deep Belief Networks.
G. Montufar and N. Ay. Neural Computation 23(5):1306-1319, 2011. [BibTeX]. Preprint [MPI MIS 23/2010], [arXiv 1005.1593]
Mixture Models and Representational Power of RBMs, DBNs and DBMs.
G. Montufar. Deep Learning and Unsupervised Feature Learning Workshop NIPS, 2010. [pdf], [pdf]

Theses

On the Expressive Power of Discrete Mixture Models, Restricted Boltzmann Machines, and Deep Belief Networks—A Unified Mathematical Treatment.
PhD Thesis, Leipzig University, October 2012. Supervisor: N. Ay. [pdf] (14.4 MB, 155 pages, 30 figures)
Theory of Transport and Photon-Statistics in a Biased Nanostructure.
German Diplom in Physics, Institute for Theoretical Physics, TU-Berlin, December 2008. Supervisors: A. Knorr and T. Brandes.
Q-Sanov Theorem for d ≥ 2.
German Diplom in Mathematics, Institute for Mathematics, TU-Berlin, August 2007. Supervisors: R. Seiler and J.-D. Deuschel.


Math Machine Learning Group


Postdocs

Rishi Sonthalia, Hedrick Assistant Adjunct Professor, UCLA, co-mentored with A. Bertozzi and J. Foster
Pradeep Kr. Banerjee, Postdoc, MPI MIS
Katerina Papagiannouli, Postdoc, MPI MIS
Jing An, Postdoc, MPI MIS, co-mentored with F. Otto
PhD students

Hui Jin, PhD candidate, UCLA
Benjamin Bowman, PhD candidate, UCLA
Jiayi Li, PhD candidate, UCLA
Johannes Müller, PhD candidate, IMPRS, MPI MIS, co-advised with Nihat Ay
Hanna Tseran, PhD candidate, IMPRS, MPI MIS
Pierre Brechet, PhD candidate, MPI MIS
Past Postdocs

Yu Guang Wang, 2020-21 Postdoc at MPI MIS, next position Associate Professor at Shanghai Jiao Tong University
Nina Otter, 2018-21 CAM Adjunct Assistant Professor at UCLA co-mentored with M. Porter, next position Lecturer (Assistant Professor) of Data Science at Queen Mary University of London
Quynh Nguyen, 2020-21 Postdoc at MPI MIS
Past PhD students

Yonatan Dukler, PhD 2021 at UCLA Math co-advised with A. Bertozzi, next position Applied Scientist at AWS


Teaching


Math 164 - Optimization, UCLA, Spring 2022
Stats 290 - Current Literature in Statistics, UCLA, Spring 2022
Math 273A - Optimization, UCLA, Fall 2021
Stats 100A - Introduction to Probability, UCLA, Fall 2021
Stats 231C - Theories of Machine Learning, UCLA, Spring 2021
Stats 200A - Applied Probability, UCLA, Fall 2020
Math 273A - Optimization and Calculus of Variations, UCLA, Fall 2020
Stats 231C - Theories of Machine Learning, UCLA, Spring 2020
Math 285J - Applied Mathematics Seminar - Deep Learning Topics, UCLA, Fall 2019
Stats 200A - Applied Probability, UCLA, Fall 2019
Stats 231C - Theories of Machine Learning, UCLA, Spring 2019
IMPRS Ringvorlesung - short course, Topics from Deep Learning, MPI MIS, Winter 2019
Math 273 - Optimization, Calculus of Variations, and Control Theory, UCLA, Fall 2018
Math 164 - Optimization, UCLA, Fall 2018
Stat 270 - Mathematical Machine Learning, UCLA, Spring 2018
Math 285J - Applied Mathematics Seminar - Deep Learning Topics, UCLA, Winter 2018
Introduction to the Theory of Neural Networks, MS/PhD Lecture, Leipzig University and MPI MIS, Summer Term 2016
Geometric Aspects of Graphical Models and Neural Networks, with N. Ay, [Abstract], MS/PhD Lecture, Leipzig University and MPI MIS, Winter Term 2014/2015


Talks


2021

Wilhelm Killing Colloquium, Mathematisches Institut, Universität Münster, Münster, Germany, December 2021. (online)
Mathematics of Information Processing Seminar, RWTH Aachen University, Aachen, Germany, November 2021. (online)
Mathematics in Imaging, Data and Optimization, Department of Mathematics, Rensselaer Polytechnic Institute, Troy, New York, November 2021. (online)
Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, China, October 2021. (online)
Mathematics of Deep Learning, Isaac Newton Institute, Cambridge, August 2021. (online)
Workshop - Sayan Mukherjee, MPI MIS, July 2021.
Statistics Department Seminar, Department of Statistics, UCLA, June 2021. (online)
Discrete Mathematics/Geometry Seminar, TU Berlin, May 2021. (online)
Mathematical Data Science Seminar, Department of Mathematics, Purdue University, March 2021. (online)
DeepMind/ELLIS CSML Seminar Series, Centre for Computational Statistics and Machine Learning, University College London (UCL), January 2021. (online)
Invited talk at the TRIPODS Winter School and Workshop on the Foundations of Graph and Deep Learning, Mathematical Institute for Data Science (MINDS) at Johns Hopkins University, Baltimore MD, January 2021. (online)
Biostatistics Winter Webinar Series, Department of Biostatistics, UCLA, January 2021. (online)
Applied and Computational Mathematics Seminar, UC Irvine, January 2021. (online)
2020

Mathematics of Data and Decision in Davis (MADDD), UC Davis, December 2020. (online)
Invited talk at GAMM Workshop Computational and Mathematical Methods in Data Science (Gesellschaft für Angewandte Mathematik und Mechanik e.V.), Max Planck Institute MIS, September 2020. (online)
Plenary talk at the Workshop Optimal Transport, Topological Data Analysis and Applications to Shape and Machine Learning, Mathematical Biosciences Institute, Ohio State University, USA, July 2020. (online)
Keynote at Algebraic Statistics in Hawaii, USA, June 2020. (online)
Mathematics Seminar, KAUST, Thuwal, Saudi Arabia, January 2020.
2019

Wasserstein Regularization for Generative and Discriminative Learning, UseDat Conf, Infospace, Moscow, September 2019.
Invited talk Factorized mutual information maximization at Prague Stochastics, Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic, Prague, August 2019.
Wasserstein Information Geometry for Learning from Data, Optimal Transport for Nonlinear Problems, ICIAM, Valencia, Spain, July 2019.
Markov Kernels with Deep Graphical Models, Latent Graphical Models, SIAM AG, Bern, July 2019.
Keynote Computing the Unique Information at 1st Workshop on Semantic Information, CVPR 2019, Long Beach, June 2019.
Tutorial Wasserstein Information Geometry for Learning from Data at Geometry and Learning from Data in 3D and Beyond, IPAM, LA, March 2019. [talk at GLTUT IPAM].
2018

RBM Intro and Review at Boltzmann Machines, AIM, October 2018.
Plenary talk Representation, Approximation, Optimization Advances for Restricted Boltzmann Machines at the 7th International Conference on Computational Harmonic Analysis, Vanderbilt University, May 2018.
Tutorial at Transylvanian Machine Learning Summer School, Cluj-Napoca, Romania, July 16-22, 2018.
Mixtures and products in two graphical models, USC Probability and Statistics Seminar, USC, April 2018.
A theory of cheap control in embodied systems, Random Structures in Neuroscience and Biology, Herrsching, Germany, March 2018.
Mode poset probability polytopes, Combinatorics Seminar (Igor Pak), UCLA, February 2018.
Uncertainty and Stochasticity in POMDPs, Machine Learning Seminar (Wilfrid Gangbo), UCLA, February 2018.
2017

Mixtures and products in two graphical models, Level Set Collective (Stanley Osher), IPAM, November 2017.
Graphical models with hidden variables, Systematic Approaches to Deep Learning Methods for Audio, Vienna, Austria, September 2017.
On the Fisher metric of conditional probability polytopes, Geometry of Information for Neural Networks, Machine Learning, Artificial Intelligence, Topological and Geometrical Structures of Information, CIRM Marseille, France, August 2017.
Notes on the number of linear regions of deep neural networks, Mathematics of Deep Learning, Special Session at International Conference on Sampling Theory, Tallinn, Estonia, July 2017.
Neural networks for cheap control of embodied behavior, Peking University (Jinchao Xu), Beijing, China, July 2017.
Selected Topics in Deep Learning, Short Course, Beijing Institute for Scientific Computing, Beijing, China, July 2017.
Learning with neural networks, Tutorial, Training Networks, Summer School, Signal Processing with Adaptive Sparse Structured Representations, Lisbon, Portugal, June 2017.
2016

Dimension of Marginals of Kronecker Product Models, Seminar on Non-Linear Algebra, TU-Berlin, Germany, November 2016.
Geometric and Combinatorial Perspectives on Deep Neural Networks, Theory of Deep Learning Workshop, ICML 2016, New York, USA, June 2016.
Plenary talk Geometry of Boltzmann Machines, [Slides], [Abstract], IGAIA IV, Liblice, Czech Republic, June 2016.
Geometric Approaches to the Design of Embodied Learning Systems, Special Symposium on Intelligent Systems, MPI for Intelligent Systems, Tübingen, Germany, March 2016.
Artificial Intelligence Overview, LikBez Seminar, MPI MIS, January 2016.
2015

Poster: A comparison of neural network architectures, Deep Learning Workshop, ICML 2015.
Poster: Mode Poset Probability Polytopes, [pdf], Algebraic Statistics 2015, Department of Mathematics, University of Genoa, Italy, June 8-11, 2015.
Poster: Deep Narrow Boltzmann Machines are Universal Approximators, [pdf], ICLR 2015.
A Theory of Cheap Control in Embodied Systems, Montreal Institute for Learning Algorithms (MILA), University of Montreal, Canada, December 2015.
Dimension of restricted Boltzmann machines, Department of Mathematics & Statistics, York University, Toronto, Canada, December 2015.
Sequential Recurrence-Based Multidimensional Universal Source Coding, Dynamical Systems Seminar, MPI MIS, November 2015.
Cheap Control of Embodied Systems, Aalto Science Institute, Espoo, Finland, November 2015.
Mode Poset Probability Polytopes, WUPES'15, Moninec, Czech Republic, September 18, 2015.
Hierarchical models as marginals of hierarchical models, WUPES'15, Moninec, Czech Republic, September 17, 2015.
Confining bipartite graphical models by simple classes of inequalities, Special Topics Session Algebraic and Geometric Approaches to Graphical Models, 60th World Statistics Congress - ISI 2015, Rio de Janeiro, Brazil, July 31, 2015.
2014

Poster: On the Number of Linear Regions of Deep Neural Networks, [pdf], NIPS 2014.
Poster: A Framework for Cheap Universal Approximation in Embodied Systems, Autonomous Learning: 3. Symposium DFG Priority Programme 1527, Berlin, September 8-9, 2014.
Poster: Geometry of hidden-visible products of statistical models, [pdf], Algebraic Statistics at IIT, Chicago, IL, 2014.
On the Number of Linear Regions of Deep Neural Networks, Montreal Institute for Learning Algorithms (MILA), Université de Montréal, Montreal, Canada, December 15, 2014.
Information Divergence from Statistical Models Defined by Neural Networks, Workshop: Information Geometry for Machine Learning, RIKEN BSI, Japan, December 2014.
Geometry of Deep Neural Networks and Cheap Design for Autonomous Learning, Google DeepMind, London, UK, October 2014.
Geometry of Hidden-Visible Products of Statistical Models, Joint Workshop on Limit Theorems and Algebraic Statistics, UTIA, Prague, August 25-29, 2014.
2013

How size and architecture determine the learning capacity of neural networks, SFI Seminar, Santa Fe, NM, USA, October 23, 2013.
Maximal Information Divergence from Statistical Models defined by Neural Networks, GSI 2013, Mines ParisTech, Paris, France, August 29, 2013.
Naive Bayes models, Seminario de Postgrado en Ingenieria de Sistemas, Universidad del Valle, Santiago de Cali, Colombia, May 30, 2013.
Discrete Restricted Boltzmann Machines, ICLR 2013, Scottsdale, AZ, USA, May 2, 2013.
2012

Poster: When Does a Mixture of Products Contain a Product of Mixtures?, [Abstract], NIPS 2012 - Deep Learning and Unsupervised Feature Learning Workshop.
Poster: Kernels and Submodels of Deep Belief Networks, [Abstract], NIPS 2012 - Deep Learning and Unsupervised Feature Learning Workshop.
When Does a Mixture of Products Contain a Product of Mixtures?, Tensor network states and algebraic geometry, ISI Foundation, Torino, Italy, November 06-08, 2012.
Universally typical sets for ergodic sources of multidimensional data, Seminar on probability and its applications (Manfred Denker), Penn State, PA, USA, October 05, 2012.
On the Expressive Power of Discrete Mixture Models, Restricted Boltzmann Machines, and Deep Belief Networks—A Unified Mathematical Treatment, PhD thesis defense, Leipzig University, October 17, 2012.
Scaling of model approximation errors and expected entropy distances, Stochastic Modelling and Computational Statistics Seminar (Murali Haran), Penn State, PA, USA, October 11, 2012.
Scaling of Model Approximation Errors and Expected Entropy Distances, WUPES'12, Mariánské Lázně, Czech Republic, September 13, 2012.
Multivalued Restricted Boltzmann Machines, [Abstract], MPI MIS, Leipzig, Germany, September 19, 2012.
Simplex packings of marginal polytopes and mixtures of exponential families, SIAM Conference on Discrete Mathematics (DM 2012), Dalhousie University, Halifax, Nova Scotia, Canada, June 18-21, 2012.
On Secants of Exponential Families, Algebraic Statistics in the Alleghenies, Penn State, PA, USA, June 08-15, 2012.
Approximation Errors of Deep Belief Networks, Applied Algebraic Statistics Seminar, Penn State, PA, USA, February 08, 2012.
2011

Submodels of Deep Belief Networks, [Abstract], Berkeley Algebraic Statistics Seminar, UC Berkeley, CA, USA, December 07, 2011.
Geometry and Approximation Errors of Restricted Boltzmann Machines, The 5th Statistical Machine Learning Seminar, Institute of Statistical Mathematics, Tachikawa, Tokyo, Japan, September 02, 2011.
Geometry of Restricted Boltzmann Machines Towards Geometry of Deep Belief Networks, RIKEN Workshop on Information Geometry, RIKEN BSI, Japan, August 31, 2011.
Selection Criteria for Neuromanifolds of Stochastic Dynamics, The 3rd International Conference on Cognitive Neurodynamics, Niseko Village, Hokkaido, Japan, June 12, 2011.
On Exponential Families and the Expressive Power of Related Generative Models, [Abstract], Laboratoire d'Informatique des Systèmes Adaptatifs (LISA), Université de Montréal, Canada, March 14, 2011.
Mixtures from Exponential Families, Neuronale Netze und Kognitive Systeme Seminar, MPI MIS, Leipzig, Germany, March 02, 2011.
Universal approximation results for Restricted Boltzmann Machines and Deep Belief Networks, Neuronale Netze und Kognitive Systeme Seminar, MPI MIS, Leipzig, Germany, February 16, 2011.
Necessary conditions for RBM universal approximators, Meeting of the Department of Decision-Making Theory - Institute of Information Theory and Automation UTIA, Marianska, Czech Republic, January 18, 2011.
2010 and earlier

Poster: Mixture Models and Representational Power of RBMs, DBNs and DBMs, NIPS 2010 - Deep Learning and Unsupervised Feature Learning Workshop, Whistler, Canada.
Poster: Faces of the probability simplex contained in the closure of an exponential family and minimal mixture representations, Information Geometry and its Applications III, Leipzig, Germany, 2010.
Information Geometry of Mean-Field Methods, Fall School on Statistical Mechanics and 5th annual PhD Student Conference in Probability, MPI MIS, Leipzig, Germany, September 07-12, 2009.
Quantum Sanov Theorem for Correlated States in Multidimensional Grids, Dies Mathematicus, TU-Berlin, Germany, February 2008.
Quantum Sanov Theorem in the Multidimensional Case, Workshop on Complexity and Information Theory, MPI MIS, Leipzig, Germany, October 2007.


Activities


We had the Reunion Conference of the IPAM Program Geometry and Learning from Data in 3D and Beyond at Lake Arrowhead, December 2021.
We are excited to be part of the Priority Programme Theoretical Foundations of Deep Learning (SPP 2298) with a project on Combinatorial and implicit approaches to deep learning.
Together with Pablo Suarez Serrato, Minh Ha Quang, and Rongjie Lai, we are organizing the BIRS-CMO Workshop Geometry and Learning from Data, online, October 2021.
Together with Benjamin Gess and Nihat Ay we are organizing the ZiF Conference on Mathematics of Machine Learning, Bielefeld, August 2021.
I am participating in the Mathematics of Deep Learning Program, Isaac Newton Institute for Mathematical Sciences, Cambridge, UK, Jul-Dec 2021.
Since June 2021 I have been serving as a research mentor at the year-long Latinx Mathematicians Research Community, AIM.
Optimal transport in the natural sciences, Mathematisches Forschungsinstitut Oberwolfach (MFO), February 2021.
Together with Wuchen Li we are organizing the Wasserstein Information Geometry special session at GSI 2019, Toulouse, France, August 2019.
I am participating in the National Workshop on Data Science Education, UC Berkeley, CA, USA, June 2019.
IST Workshop on Deep Learning Theory, IST, Vienna, Austria, September 2019.
Together with Joan Bruna, Yu Guang Wang, Nina Otter, and Zheng Ma, we had the ICERM Collaborate Group Geometry of Data and Networks, Institute for Computational and Experimental Research in Mathematics, Providence, RI, USA, June 2019.
I am a co-organizer of the IPAM Long Program Geometry and Learning from Data in 3D and Beyond, Institute for Pure and Applied Mathematics, Los Angeles, CA, USA, March - June 2019.
We had a fantastic Deep Learning Theory Kickoff Meeting at the Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany, March 2019.
DALI, January 2019.
Asja Fischer, Jason Morton, and I are organizing the AIM Workshop Boltzmann Machines, American Institute of Mathematics, San Jose, CA, USA, September 2018.
With Asja Fischer I am organizing the Theory of Deep Learning Workshop at DALI 2018, Lanzarote, Spain, April 2018.
Latinx in the Mathematical Sciences Conference, IPAM, March 2018.
Together with Christiane Goergen, Nihat Ay, and Andre Uschmajew, I am a co-initiator of the Math of Data Initiative, Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany.
NIPS 2017, Long Beach, CA, December 2017.
Geometric Science of Information, Paris, November 2017.
ICML 2017, Principled Approaches to Deep Learning, Program Committee, Sydney, Australia, August 2017.
Oberwolfach Workshop Algebraic Statistics, Mathematisches Forschungsinstitut Oberwolfach, Germany, April 2017.
Santa Fe Institute, Visit for Research Collaboration (Nihat Ay), Santa Fe, NM, USA, October 2016.
NIPS 2015, Montréal, Canada.
Santa Fe Institute, Visit for Research Collaboration (Nihat Ay), Santa Fe, NM, USA, October 15-November 15, 2014.
Information Geometry in Learning and Optimization, University of Copenhagen, September 22-26, 2014.
Autonomous Learning: 3. Symposium DFG Priority Programme 1527, Magnus Haus Berlin, Germany, September 08-09, 2014.
Autonomous Learning: Summer School, MPI MIS, September 01-04, 2014.
Santa Fe Institute, Visit for Research Collaboration (Nihat Ay), October 1-27, 2013.
SFI Working Group "Information Theory of Sensorimotor Loops", Santa Fe Institute, Santa Fe, NM, USA, October 8-11, 2013.
Pennsylvania State University, Visit for Research Collaboration (Jason Morton), PA, USA, September 2013.
Algebraic Statistics in Europe, IST Austria, September 28-30, 2012.
Singular Learning Theory, AIM Workshop, American Institute of Mathematics, Palo Alto, CA, USA, December 12-16, 2011.
RIKEN-BSI, Laboratory for Mathematical Neuroscience (Prof. S. Amari), Internship, Hirosawa, Wako, Saitama, Japan, August-October 2011.
SFI Complex Systems Summer School (CSSS11), Saint John's College, Santa Fe, NM, USA, June 8-July 1, 2011.
Information Geometry and its Applications (IGAIA III), Leipzig University, Germany, August 2010.


Last updated: 10/2021