Bayesian dithering for learning: Asymptotically optimal policies in dynamic pricing
We consider a dynamic pricing and learning problem where a seller prices multiple products and learns about unknown demand from sales data. We study a parametric demand model in a Bayesian setting. To avoid the classical problem of incomplete learning, we propose dithering policies under which prices are probabilistically selected in a neighborhood surrounding the myopic optimal price. By analyzing the effect of dithering in facilitating learning, we establish regret upper bounds for three typical settings of the demand model. We show that the dithering policy achieves an upper bound of order log T when the parameter set is finite; it can be modified to achieve a constant regret bound under an additional assumption. We also prove an upper bound of order √(T log T) when the parameter set is compact and convex. Each bound matches (up to a logarithmic factor) the existing lower bound for any pricing policy. In this way, we show that dithering policies achieve asymptotically optimal performance in three different parameter settings, which establishes dithering as a unified approach for striking the balance between exploration and exploitation.
Main Authors: | HUH, Woonghee Tim; KIM, Michael Jong; LIN, Meichun |
---|---|
Format: | text |
Language: | English |
Published: | Institutional Knowledge at Singapore Management University, 2022 |
Collection: | Research Collection Lee Kong Chian School Of Business (InK@SMU) |
Subjects: | Bayesian learning; dynamic pricing; exploration-exploitation; regret analysis; Operations and Supply Chain Management; Operations Research, Systems Engineering and Industrial Engineering |
DOI: | 10.1111/poms.13786 |
Online Access: | https://ink.library.smu.edu.sg/lkcsb_research/7312 |
Institution: | Singapore Management University |
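The abstract describes the dithering policy only at a high level: prices are selected probabilistically in a neighborhood of the myopic optimal price while a Bayesian posterior over the demand parameters is updated from observed sales. The sketch below is an illustrative toy version of that idea, not the algorithm analyzed in the paper: the single-product exponential purchase-probability model, the three-point parameter set, and the fixed dithering radius are assumptions made here purely for concreteness.

```python
# Illustrative sketch only -- NOT the paper's policy or demand model.
# Assumptions made here: one product, purchase probability exp(-theta * p),
# a finite parameter set {0.5, 1.0, 1.5}, and a constant dithering radius.
import numpy as np

rng = np.random.default_rng(0)

thetas = np.array([0.5, 1.0, 1.5])   # hypothetical finite parameter set
true_theta = 1.0                     # unknown to the seller
posterior = np.ones(len(thetas)) / len(thetas)   # uniform Bayesian prior

def purchase_prob(theta, p):
    # Probability that a customer buys at price p under parameter theta.
    return np.exp(-theta * p)

def myopic_price(post):
    # Price maximizing expected revenue p * E_posterior[purchase_prob(theta, p)]
    # over a coarse grid (a simple stand-in for the myopic optimal price).
    grid = np.linspace(0.1, 3.0, 300)
    expected = np.array([(post * purchase_prob(thetas, p)).sum() for p in grid])
    return grid[np.argmax(grid * expected)]

T = 2000
radius = 0.2   # dithering radius, held constant here for simplicity
revenue = 0.0

for t in range(T):
    p_myopic = myopic_price(posterior)
    # Dithering: probabilistically pick a price in a neighborhood of p_myopic.
    p = p_myopic + rng.uniform(-radius, radius)
    sale = rng.random() < purchase_prob(true_theta, p)   # observed demand
    revenue += p * sale
    # Bayesian update of the posterior over the finite parameter set.
    lik = purchase_prob(thetas, p) if sale else 1.0 - purchase_prob(thetas, p)
    posterior = posterior * lik
    posterior /= posterior.sum()

print("posterior over thetas:", np.round(posterior, 3),
      "| total revenue:", round(revenue, 1))
```

In this toy version the randomization around the myopic price keeps the observed prices varied enough for the posterior to concentrate on the true parameter, which is the exploration-exploitation balance the abstract attributes to dithering; the paper's settings, policies, and regret guarantees are considerably more general.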