We propose a federated learning approach that allows multiple communication rounds per cohort, achieving up to a 74% reduction in total communication cost via a new variant of the stochastic proximal point method.
Jun 1, 2024
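For intuition only, here is a minimal sketch of the core idea, not the paper's actual algorithm or API: each outer step approximately solves a proximal subproblem around the current server model by running several communication rounds with the same sampled cohort. The quadratic client losses, step sizes, and helper names such as `sppm_step` are illustrative assumptions.

```python
# Sketch: multiple communication rounds per cohort, framed as an
# inexact stochastic proximal point step. Each outer iteration
# approximately solves
#   min_x  f_cohort(x) + (1 / (2 * gamma)) * ||x - anchor||^2
# by several inner rounds of averaged client gradient steps.
# All problem data and hyperparameters below are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d, n_clients = 5, 20
# Client i holds a quadratic loss f_i(x) = 0.5 * ||x - b_i||^2.
targets = rng.normal(size=(n_clients, d))

def client_prox_grad(x, b, anchor, gamma):
    """Gradient of f_i(x) + (1/(2*gamma)) * ||x - anchor||^2."""
    return (x - b) + (x - anchor) / gamma

def sppm_step(anchor, cohort, gamma, inner_rounds=5, lr=0.1):
    """Approximate the prox of the cohort-average loss at `anchor`
    by spending several communication rounds on the same cohort."""
    x = anchor.copy()
    for _ in range(inner_rounds):                  # extra rounds per cohort
        grads = [client_prox_grad(x, targets[i], anchor, gamma)
                 for i in cohort]                  # one round of uploads
        x = x - lr * np.mean(grads, axis=0)        # server aggregates
    return x

x = np.zeros(d)
gamma = 1.0
for t in range(50):
    cohort = rng.choice(n_clients, size=5, replace=False)
    x = sppm_step(x, cohort, gamma)   # x_{t+1} ~ prox_{gamma * f_S}(x_t)
print("distance to global optimum:", np.linalg.norm(x - targets.mean(axis=0)))
```

The design point the sketch tries to surface is that the extra inner rounds are spent on a better local solve of one subproblem, rather than on more outer steps, which is where the communication savings come from.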
This work introduces methods that combine random reshuffling with gradient compression for distributed and federated learning, providing theoretical analysis and practical improvements over existing approaches.
Jun 14, 2022
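As a rough illustration of the combination being described, and not the paper's exact methods, the sketch below runs an epoch-wise random-reshuffling loop in which every transmitted gradient passes through an unbiased Rand-K sparsifier. The quadratic objective, the choice of Rand-K, and all hyperparameters are assumptions made for the example.

```python
# Sketch: random reshuffling (a fresh permutation of the data each
# epoch) combined with gradient compression (an unbiased Rand-K
# sparsifier). Problem data and hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
d, n = 10, 100
targets = rng.normal(size=(n, d))        # f_i(x) = 0.5 * ||x - a_i||^2

def rand_k(g, k):
    """Unbiased Rand-K compressor: keep k random coordinates,
    rescaled by d/k so that E[C(g)] = g."""
    mask = np.zeros_like(g)
    idx = rng.choice(g.size, size=k, replace=False)
    mask[idx] = g.size / k
    return g * mask

x = np.zeros(d)
lr, k = 0.05, 3
for epoch in range(30):
    for i in rng.permutation(n):         # reshuffling: each point once/epoch
        g = x - targets[i]               # gradient of f_i at x
        x = x - lr * rand_k(g, k)        # only k coordinates are "sent"
print("distance to optimum:", np.linalg.norm(x - targets.mean(axis=0)))
```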