Online contextual influence maximization with costly observations

In the online contextual influence maximization problem with costly observations, the learner faces a series of epochs in each of which a different influence spread process takes place over a network. At the beginning of each epoch, the learner exogenously influences (activates) a set of seed nodes in the network. Then, the influence spread process takes place over the network, through which other nodes get influenced. The learner has the option to observe the spread of influence by paying an observation cost. The goal of the learner is to maximize its cumulative reward, which is defined as the expected total number of influenced nodes over all epochs minus the observation costs. We depart from prior work in three aspects: 1) the learner does not know how the influence spreads over the network, i.e., it is unaware of the influence probabilities; 2) influence probabilities depend on the context; and 3) observing influence is costly. We consider two different influence observation settings: costly edge-level feedback, in which the learner freely observes the set of influenced nodes but pays to observe the influence outcomes on the edges of the network; and costly node-level feedback, in which the learner pays to observe whether a node is influenced or not. Since the offline influence maximization problem itself is NP-hard, for these settings, we develop online learning algorithms that use an approximation algorithm as a subroutine to obtain the set of seed nodes in each epoch. When the influence probabilities are Hölder continuous functions of the context, we prove that these algorithms achieve sublinear regret (for any sequence of contexts) with respect to an approximation oracle that knows the influence probabilities for all contexts. Our numerical results on several networks illustrate that the proposed algorithms perform on par with state-of-the-art methods even when the observations are cost free.
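The seed-selection subroutine the abstract refers to is an approximation algorithm for influence maximization; the classic example is greedy seed selection with Monte Carlo spread estimation under the independent cascade model. The sketch below is purely illustrative of that idea: all function names are assumptions, and it uses a single known, uniform edge activation probability `prob`, whereas in the paper the probabilities are edge-specific, context-dependent, and unknown to the learner.

```python
import random

def simulate_cascade(graph, seeds, prob):
    """One independent-cascade simulation; returns the set of influenced nodes.
    `graph` maps each node to a list of out-neighbours; `prob` is a uniform
    edge activation probability (a simplifying assumption)."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                # Each newly active node gets one chance to influence each neighbour.
                if v not in active and random.random() < prob:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

def greedy_seed_set(graph, k, prob=0.1, runs=200):
    """Pick k seeds by greedily adding the node with the largest estimated
    marginal gain in expected spread (Monte Carlo average over `runs`)."""
    seeds = []
    for _ in range(k):
        best, best_spread = None, -1.0
        for cand in graph:
            if cand in seeds:
                continue
            est = sum(len(simulate_cascade(graph, seeds + [cand], prob))
                      for _ in range(runs)) / runs
            if est > best_spread:
                best, best_spread = cand, est
        seeds.append(best)
    return seeds
```

Because the expected spread is monotone and submodular, this greedy scheme attains the well-known (1 − 1/e)-approximation guarantee that oracle-based regret bounds of this kind are typically stated against.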

Bibliographic Details
Main Authors: SARITAC, Omer, KARAKURT, Altug, TEKIN, Cem
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2019
Subjects: Influence maximization; combinatorial bandits; social networks; approximation algorithms; costly observations; regret bounds; Operations and Supply Chain Management; Theory and Algorithms
Online Access:https://ink.library.smu.edu.sg/lkcsb_research/7602
DOI: 10.1109/TSIPN.2018.2866334
Collection: Research Collection, Lee Kong Chian School of Business
Date: 2019-06-30