About me

My name is Avetik Karagulyan. I am a Research Scientist at CNRS/L2S. Previously, I was a postdoctoral fellow at KAUST in the team of Professor Peter Richtárik. I defended my thesis at the Center for Research in Economics and Statistics (CREST), Paris, under the supervision of Professor Arnak Dalalyan. In 2018, I received my MSc in Mathematics, Vision, Learning (MVA) from ENS Paris-Saclay with highest honors (mention "très bien"). I graduated from Yerevan State University's Faculty of Mathematics and Mechanics in 2017 with distinction.
My research focuses on sampling methods and their connections to optimization.

News

2024


Position at CNRS

I will join the Laboratoire des Signaux et Systèmes (L2S) at CentraleSupélec as a Research Scientist (Chargé de Recherche) of the Centre National de la Recherche Scientifique (CNRS).

New paper on federated learning

"SPAM: Stochastic Proximal Point Method with Momentum Variance Reduction for Non-convex Cross-Device Federated Learning" is on arXiv. It is a joint work with Egor Shulgin, Abdurakhmon Sadiev and Peter Richtárik.

Two papers accepted to ICLR!


2023


New paper on matrix stepsized non-convex optimization

"MARINA Meets Matrix Stepsizes: Variance Reduced Distributed Non-Convex Optimization" is on arXiv. It is a joint work with Hanmin Li, and Peter Richtárik.

Our paper on federated sampling got accepted to the FL workshop at ICML23

Our paper was accepted to the "Federated Learning and Analytics in Practice: Algorithms, Systems, Applications, and Opportunities" workshop at ICML 2023.

Was invited to give a talk in Armenia

Presented our federated sampling paper at the summer school "Statistics and Learning Theory".

Participated in the 3rd "Mathematics in Armenia" conference

Presented our federated sampling paper at the 3rd "Mathematics in Armenia" conference in Yerevan.

Visiting MSR, Redmond

Visited Adil Salim at Microsoft Research, Redmond.

New paper on matrix stepsized non-convex optimization

"Det-CGD: Compressed Gradient Descent with Matrix Stepsizes for Non-Convex Optimization" Hanmin Li, and Peter Richtárik.

New paper on Langevin sampling

"Langevin Monte Carlo for strongly log-concave distributions: Randomized midpoint revisited" is on arXiv. It is a joint work with Lu Yu, and Arnak Dalalyan.

Paper accepted to AISTATS 2023

Our paper "Convergence of Stein Variational Gradient Descent under a Weaker Smoothness Condition" got accepted to AISTATS 2023.

New paper on federated sampling

"ELF: Federated Langevin Algorithms with Primal, Dual and Bidirectional Compression" a joint work with Peter Richtárik.

I am co-organizing a mini-symposium at SIAM OP23

Together with Anna Korba and Adil Salim, I am organizing a mini-symposium titled "Wasserstein gradient flows and applications" at SIAM OP23.

2022


Invited virtual talk

I was invited to give a virtual talk at the "Quantitative Methods Area Seminar" of the Krannert School of Management, Purdue University.

Our paper got accepted to JMLR

The paper "Bounding the error of discretized Langevin algorithms for non-strongly log-concave targets" by Arnak Dalalyan, me and Lionel Riou-Durand, got accepted to the Journal of Machine Learning Research.

Summer conferences

I presented a poster on the paper "Convergence of Stein Variational Gradient Descent under a Weaker Smoothness Condition" at the "Conference on the Mathematics for Complex Data", the International Symposium on Non-parametric Statistics, and "Statistical Physics and Machine Learning".

New paper

"Convergence of Stein Variational Gradient Descent under a Weaker Smoothness Condition" is available on ArXiv. It is a joint work with Lukang Sun, and Peter Richtárik.