Privacy-preserving distributed projection LMS for linear multitask networks

Bibliographic Details
Main Authors: Wang, Chengcheng, Tay, Wee Peng, Wei, Ye, Wang, Yuan
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language: English
Published: 2022
Online Access: https://hdl.handle.net/10356/156347
Institution: Nanyang Technological University
Description
Summary: We develop a privacy-preserving distributed projection least mean squares (LMS) strategy over linear multitask networks, where agents' local parameters of interest, or tasks, are linearly related. Each agent is interested not only in improving its local inference performance via in-network cooperation with neighboring agents, but also in protecting its own individual task against privacy leakage. In our proposed strategy, at each time instant, each agent sends a noisy estimate, which is its local intermediate estimate corrupted by zero-mean additive noise, to its neighboring agents. We derive a sufficient condition to determine the amount of noise to add to each agent's intermediate estimate so as to achieve an optimal trade-off between the network mean-square deviation and an inference privacy constraint. We propose a distributed and adaptive strategy to compute the additive noise powers, and study the mean and mean-square behaviors and privacy-preserving performance of the proposed strategy. Simulation results demonstrate that our strategy is able to balance the trade-off between estimation accuracy and privacy preservation.
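
The following Python fragment is a minimal sketch of the noisy-sharing step described in the summary, not the authors' exact algorithm: an agent adapts its local estimate with LMS, corrupts the intermediate estimate with zero-mean additive noise before transmission, and combines the estimates it receives from neighbors. All function names, the step size, the combination weights, and the Gaussian noise model are illustrative assumptions; the paper itself derives how much noise each agent should add and uses projections onto the linear task relations.

import numpy as np

def lms_adapt(w, u, d, mu=0.05):
    # Standard LMS adaptation producing the agent's intermediate estimate.
    return w + mu * (d - u @ w) * u

def privatize(psi, noise_power):
    # Zero-mean additive noise applied before sharing with neighbors
    # (the inference-privacy mechanism sketched in the summary).
    return psi + np.sqrt(noise_power) * np.random.randn(*psi.shape)

def combine(estimates, weights):
    # Convex combination of the (noisy) estimates available to the agent.
    return sum(a * est for a, est in zip(weights, estimates))

# Example: one time step for a single agent with two neighbors (synthetic data).
np.random.seed(0)
w_true = np.ones(4)
w = np.zeros(4)
u = np.random.randn(4)
d = u @ w_true + 0.01 * np.random.randn()

psi = lms_adapt(w, u, d)                      # local intermediate estimate
shared = privatize(psi, noise_power=0.1)      # what this agent actually transmits
neighbor_psis = [np.random.randn(4), np.random.randn(4)]  # stand-ins for received noisy estimates
w_next = combine([psi] + neighbor_psis, [0.5, 0.25, 0.25])

In this sketch a larger noise_power gives stronger protection of the agent's own task but degrades the combined estimate, which is the accuracy-versus-privacy trade-off the paper quantifies.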