We propose a novel federated learning approach that allows multiple communication rounds per cohort, achieving up to a 74% reduction in total communication cost through a new variant of the stochastic proximal point method.
Jun 1, 2024
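The key mechanism here is that each sampled cohort is used for several communication rounds rather than one, which in proximal-point terms amounts to solving a regularized subproblem inexactly before moving on. Below is a minimal sketch of that idea under our own assumptions (a toy least-squares objective, plain inner gradient steps, illustrative names); it is not the paper's algorithm.

```python
import numpy as np

# Minimal sketch, not the paper's method: one stochastic proximal point step,
#   x_{k+1} ~ argmin_y f(y) + (1/(2*gamma)) * ||y - x_k||^2,
# solved inexactly by a few inner gradient steps. In a federated setting each
# inner step would cost one extra communication round with the same cohort.

def inexact_prox_step(grad_f, x, gamma, inner_steps=5, inner_lr=0.1):
    """Approximate prox_{gamma*f}(x) by gradient descent on the subproblem."""
    y = x.copy()
    for _ in range(inner_steps):
        g = grad_f(y) + (y - x) / gamma   # gradient of the proximal subproblem
        y = y - inner_lr * g
    return y

# Toy usage on f(x) = 0.5 * ||A x - b||^2 (all names here are illustrative).
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 5)) / np.sqrt(40)
b = rng.standard_normal(40)
grad_f = lambda x: A.T @ (A @ x - b)

x = np.zeros(5)
for _ in range(50):
    x = inexact_prox_step(grad_f, x, gamma=0.5)
print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
```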
We provide a unified theoretical framework for analyzing SGD with biased gradient estimators, establishing connections between existing assumptions and introducing new weaker conditions. Presented as a Poster at NeurIPS 2023.
May 25, 2023
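For readers unfamiliar with the setting, a biased gradient estimator is one whose expectation differs from the true gradient; top-k sparsification of the gradient is a standard example. The sketch below, with our own toy problem and parameter choices, only illustrates what such an estimator looks like inside SGD; it does not reproduce the paper's assumptions or analysis.

```python
import numpy as np

# Illustrative sketch only: gradient descent driven by a *biased* estimator.
# Top-k sparsification is contractive, ||C(g) - g||^2 <= (1 - k/d) * ||g||^2,
# but not unbiased. Problem setup and constants are ours, not the paper's.

def top_k(v, k):
    """Keep the k largest-magnitude entries of v, zero out the rest (biased)."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 20)) / 10.0
b = rng.standard_normal(100)
x = np.zeros(20)

lr = 0.5
for t in range(500):
    g = A.T @ (A @ x - b)          # full gradient of 0.5 * ||A x - b||^2
    x -= lr * top_k(g, k=5)        # biased step: only 5 of 20 coordinates move
print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
```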
This work introduces novel methods combining random reshuffling with gradient compression for distributed and federated learning, providing theoretical analysis and practical improvements over existing approaches.
Jun 14, 2022
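As a rough illustration of the two ingredients being combined, the sketch below runs one pass over permuted data per epoch (random reshuffling) and compresses each per-sample gradient with an unbiased random-k sparsifier before the update. The problem data, compressor choice, and step size are our assumptions, not the paper's specific methods.

```python
import numpy as np

# Minimal sketch combining random reshuffling (a fresh permutation each epoch,
# i.e. sampling without replacement) with gradient compression (random-k here).

rng = np.random.default_rng(2)

def rand_k(v, k):
    """Unbiased random-k sparsifier: keep k random coords, rescale by d/k."""
    d = v.size
    idx = rng.choice(d, size=k, replace=False)
    out = np.zeros_like(v)
    out[idx] = v[idx] * (d / k)
    return out

# Least-squares data: f_i(x) = 0.5 * (a_i @ x - b_i)^2 (illustrative setup).
n, d = 200, 10
A = rng.standard_normal((n, d)) / np.sqrt(n)
b = rng.standard_normal(n)

x, lr = np.zeros(d), 0.5
for epoch in range(100):
    for i in rng.permutation(n):          # random reshuffling: one full pass
        g = A[i] * (A[i] @ x - b[i])      # per-sample gradient
        x -= lr * rand_k(g, k=3)          # compressed update
print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
```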
We introduce three point compressors (3PC), a novel framework unifying and improving communication-efficient distributed optimization methods. Presented as a Poster at ICML 2022.
Feb 2, 2022
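Our reading of the 3PC idea is that a three point compressor builds a new estimate of the current gradient x from the previous estimate h and the previous gradient y, with classical schemes recovered as special cases. The sketch below shows that interface plus two such special cases, an EF21-style map and a lazy-aggregation-style map; the trigger rule, constants, and names are our own illustration, not taken from the paper.

```python
import numpy as np

def top_k(v, k):
    """Keep the k largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def ef21_3pc(h, y, x, k=2):
    """EF21-style instance: compress the displacement from the old estimate.
    (The previous gradient y is not needed by this particular map.)"""
    return h + top_k(x - h, k)

def lag_3pc(h, y, x, zeta=0.1):
    """Lazy-aggregation-style instance: send x only if it moved enough,
    otherwise keep the old estimate h (trigger rule is our assumption)."""
    return x if np.linalg.norm(x - h) ** 2 > zeta * np.linalg.norm(x - y) ** 2 else h

# Tiny demonstration: both maps move the estimate h toward the new gradient x.
h = np.zeros(6)
y = np.zeros(6)
x = np.array([3.0, -2.0, 0.5, 0.1, -0.1, 4.0])
print(ef21_3pc(h, y, x, k=2))   # only the two largest entries of x - h are kept
print(lag_3pc(h, y, x))         # x moved a lot relative to y, so x is sent
```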
We propose EF21, a novel approach to error feedback offering a better theoretical convergence rate than prior error-feedback methods, along with strong empirical results. Presented as an Oral + Poster at NeurIPS 2021.
Jun 9, 2021
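The EF21 mechanism, as we understand it, has each worker maintain a local gradient estimate and communicate only a compressed correction to it, so the server's aggregate estimate tracks the true average gradient. The single-process simulation below sketches that update rule on a toy quadratic problem; the data, top-k compressor, and step size are illustrative assumptions.

```python
import numpy as np

# Single-process sketch of an EF21-style error-feedback loop: worker i keeps a
# gradient estimate g_i and only sends the compressed difference between its
# fresh gradient and g_i. Problem data and compressor choice are illustrative.

rng = np.random.default_rng(3)

def top_k(v, k):
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

n, d, k = 10, 20, 5
A = [rng.standard_normal((30, d)) / 10 for _ in range(n)]   # f_i(x) = 0.5*||A_i x - b_i||^2
b = [rng.standard_normal(30) for _ in range(n)]
grad = lambda i, x: A[i].T @ (A[i] @ x - b[i])

x = np.zeros(d)
g = [grad(i, x) for i in range(n)]          # workers initialize g_i at x^0
g_avg = sum(g) / n
lr = 0.1
for t in range(300):
    x = x - lr * g_avg                      # server step with aggregated estimate
    for i in range(n):
        c = top_k(grad(i, x) - g[i], k)     # worker sends only the compressed correction
        g[i] += c
        g_avg += c / n                      # server updates its running average
print("grad norm:", np.linalg.norm(sum(grad(i, x) for i in range(n)) / n))
```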