About

akregeb at gmail dot com

Welcome to my tiny corner of the internet! I’m Ahmed, and I work on optimization and machine learning. I have a Bachelor’s degree in Computer Engineering from Cairo University, Egypt. I’m (hopefully) going to join Princeton’s ECE department as a Ph.D. student starting next year.

I was fortunate to intern in the group of Prof. Peter Richtárik at KAUST in the summers of 2019 and 2020, where I worked on distributed & stochastic optimization. Prior to that, I did some (applied) research with Prof. Amir Atiya on accelerating the training of neural networks.

Papers

(In reverse order of preparation)

Proximal and Federated Random Reshuffling
Preprint (2021), with Konstantin Mishchenko and Peter Richtárik. (bibtex).
Unified Analysis of Stochastic Gradient Methods for Composite Convex and Smooth Optimization
Preprint (2020), with Othmane Sebbouh, Nicolas Loizou, Robert M. Gower, and Peter Richtárik. (bibtex).
Random Reshuffling: Simple Analysis with Vast Improvements
Advances in Neural Information Processing Systems 33 (NeurIPS 2020), with Konstantin Mishchenko and Peter Richtárik. (bibtex).
Better Theory for SGD in the Nonconvex World
Preprint (2020), with Peter Richtárik. (bibtex).
Distributed Fixed Point Methods with Compressed Iterates
Preprint (2019), with Sélim Chraibi, Dmitry Kovalev, Peter Richtárik, Adil Salim, and Martin Takáč. (bibtex).
Tighter Theory for Local SGD on Identical and Heterogeneous Data
The 23rd International Conference on Artificial Intelligence and Statistics (AISTATS) 2020, with Konstantin Mishchenko and Peter Richtárik. (bibtex). Extends the workshop papers (a, b) below.
Better Communication Complexity for Local SGD
Oral presentation at the NeurIPS 2019 Workshop on Federated Learning for Data Privacy and Confidentiality, with Konstantin Mishchenko and Peter Richtárik. (bibtex).
First Analysis of Local GD on Heterogeneous Data
NeurIPS 2019 Workshop on Federated Learning for Data Privacy and Confidentiality, with Konstantin Mishchenko and Peter Richtárik. (bibtex).
Gradient Descent with Compressed Iterates
NeurIPS 2019 Workshop on Federated Learning for Data Privacy and Confidentiality, with Peter Richtárik. (bibtex).
Applying Fast Matrix Multiplication to Neural Networks
The 35th ACM/SIGAPP Symposium on Applied Computing (ACM SAC) 2020, with Amir F. Atiya and Ahmed H. Abdel-Gawad. (bibtex).

Talks

On the Convergence of Local SGD on Identical and Heterogeneous Data
Federated Learning One World Seminar (2020). Video and Slides.