Keynotes
Federated Self-supervised Learning
Current federated learning (FL) applications predominantly focus on supervised learning tasks, which demand high-quality, domain-specific labels that are often unavailable at the edge. The integration of self-supervised learning (SSL) with FL offers a promising solution by enabling the extraction of useful representations from unlabelled data, thereby expanding FL’s applicability to real-world scenarios where label scarcity is an issue. This talk will introduce SSL model training within FL environments across three key domains: speech, video, and image. I will walk through the main challenges and potential solutions for FL-SSL in each domain. For speech, I will present a systematic study on the feasibility of implementing speech SSL in FL, addressing both hardware limitations and algorithmic challenges. For video SSL, I will introduce preliminary studies through a novel FL framework that integrates stochastic weighted averaging and partial weight updating. Lastly, I will dive into the issue of model divergence in federated SSL for images, presenting a new aggregation scheme that uses angular divergence to weight client models effectively.
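To give a flavour of angular-divergence-based aggregation, the sketch below weights each client model by the angle between its parameters and the global model's, so clients that diverge less contribute more to the average. This is a minimal illustration under assumed conventions (function names, the inverse-angle weighting, and the normalisation are this sketch's choices, not necessarily the scheme presented in the talk):

```python
import numpy as np

def angular_divergence_weights(global_params, client_params_list, eps=1e-12):
    """Compute one aggregation weight per client from the angle between
    the flattened client parameters and the flattened global parameters.
    Smaller angular divergence -> larger weight. Weights sum to 1."""
    g = np.concatenate([p.ravel() for p in global_params])
    angles = []
    for client_params in client_params_list:
        c = np.concatenate([p.ravel() for p in client_params])
        cos = np.dot(g, c) / (np.linalg.norm(g) * np.linalg.norm(c) + eps)
        angles.append(np.arccos(np.clip(cos, -1.0, 1.0)))
    # Invert the angles so less-divergent clients count more, then normalise.
    inv = 1.0 / (np.array(angles) + eps)
    return inv / inv.sum()

def aggregate(client_params_list, weights):
    """Weighted average of the clients' parameter tensors, layer by layer."""
    return [sum(w * client[i] for w, client in zip(weights, client_params_list))
            for i in range(len(client_params_list[0]))]
```

In this toy form a client whose update points in the same direction as the global model receives almost all of the aggregation mass, while an orthogonal client is nearly ignored; a practical scheme would likely temper this with softer weighting.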
DR. YAN GAO (Flower Labs)
Yan Gao is a Research Scientist at Flower Labs and Adjunct Researcher at the University of Cambridge, where his work is at the forefront of federated learning innovation. Prior to this role, he completed his PhD at the University of Cambridge under the supervision of Professor Nicholas Lane, within the Machine Learning System Lab. His research interests include machine learning, federated learning, self-supervised learning, and optimisation techniques. Throughout his doctoral studies, he focused on pioneering research in federated self-supervised learning, specifically targeting the challenge of working with unlabelled data across diverse domains such as audio, image, and video. This work has been recognised and published in several top-tier international conferences and journals, including ICCV, ECCV, ICLR, INTERSPEECH, ICASSP, IMWUT, and JMLR, marking significant contributions to the field of federated learning and its applications.
Since its inception in 2016, Federated Learning (FL) has been gaining tremendous popularity in the machine learning community. Several frameworks have been proposed to facilitate the development of FL algorithms, but researchers often resort to implementing their algorithms from scratch, including all baselines and experiments, because existing frameworks are not flexible enough to support their needs or the learning curve to extend them is too steep. We present fluke, a Python package designed to simplify the development of new FL algorithms. fluke is specifically designed for prototyping purposes and is meant for researchers or practitioners focusing on the learning components of a federated system. fluke is open-source, and it can either be used out of the box or extended with new algorithms with minimal overhead.
Dr. Mirko Polato (University of Turin)
Mirko Polato is an Assistant Professor at the Department of Computer Science at the University of Turin, Italy. He earned his MSc and Ph.D. in Brain, Mind, and Computer Science from the University of Padova in 2013 and 2018, respectively. In 2017, he was a visiting Ph.D. student at Delft University of Technology. From 2018 to 2021, he was a post-doctoral fellow at the University of Padova, working on H2020 projects. His research focuses on Federated Learning, interpretable machine learning, and recommender systems. He has organized several workshops and sessions on Federated Learning and has authored around 50 research publications.
More information about Mirko can be found on his homepage https://makgyver.github.io.
Provably private learning on federated data for Large Language Models and more (Remote Presentation)
This talk will begin by highlighting progress at Google on cross-device federated learning over the past few years, including how FL works with other privacy technologies like differential privacy and secure multi-party computation to support a broad set of privacy principles. We will then turn to an exciting set of new challenges and questions: Can we scale to more applications? Can we scale to much larger models like LLMs? Can we prove that only certain kinds of computations are allowed to run server-side? Can we build systems that are robust to Sybil attacks?
Dr. BRENDAN MCMAHAN (GOOGLE)
Brendan McMahan received a Ph.D. in Computer Science from Carnegie Mellon University, USA. He is now Principal Research Scientist at Google, leading efforts on decentralized and privacy-preserving machine learning.
His team pioneered the concept of federated learning and continues to push the boundaries of what is possible when working with decentralized data using privacy-preserving techniques. Previously, he worked on online learning, large-scale convex optimization, and reinforcement learning.
Additional details on Brendan's activities are available on his homepage.