Memory Efficient Federated Recommendation Model

Abstract

Federated Learning (FL) is rapidly gaining traction in Machine Learning (ML) as a way to train models collaboratively in a decentralized fashion, driven by growing privacy concerns. One domain where FL shows promising improvements over centralized ML solutions is Recommendation Systems (RS), since FL-based RS can adapt quickly to new users and new items, providing better recommendations without taking data off user devices. However, training FL-based RS on edge devices at large scale brings its own challenges, as the numbers of users and items can grow without bound. This paper introduces a novel model framework for FL-based RS that employs a fixed-size encoding scheme for items and users. Because the encoding is independent of the number of items and users, the framework is memory efficient and well suited to large-scale RS applications. The proposed framework is explored through a Next App Recommendation use case and is shown to deliver recommendation performance comparable to SOTA solutions with much lower memory consumption.
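
The abstract does not spell out the encoding itself, so the sketch below is only a rough illustration of why a fixed-size, count-independent encoding bounds memory: a hashing-trick embedding whose footprint depends on an assumed bucket count and embedding dimension rather than on how many users or items exist. The bucket count, dimension, and hash choice here are assumptions for illustration, not the paper's actual design.

```python
import hashlib
import numpy as np

# Assumed sizes for illustration only -- memory is fixed by these two
# constants, not by the (potentially unbounded) number of users/items.
NUM_BUCKETS = 4096
EMBED_DIM = 32

rng = np.random.default_rng(0)
table = rng.normal(scale=0.01, size=(NUM_BUCKETS, EMBED_DIM))

def encode(entity_id: str) -> np.ndarray:
    """Map an arbitrary user or item ID to a fixed-size embedding via hashing."""
    bucket = int(hashlib.sha256(entity_id.encode()).hexdigest(), 16) % NUM_BUCKETS
    return table[bucket]

# New apps or users reuse the same fixed table, so memory does not grow.
vec_item = encode("com.example.maps")
vec_user = encode("user_123456")
print(vec_item.shape, vec_user.shape)  # (32,) (32,)
```

In an FL setting, only this fixed-size table (plus the rest of the model) would need to live on each edge device and be exchanged with the server, which is what makes the count-independent encoding attractive for large-scale deployments.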

Publication
In 2022 IEEE 16th International Conference on Semantic Computing (ICSC)
Mrinaal Dogra