Scalable distributional robustness in a class of non convex optimization with guarantees

Distributionally robust optimization (DRO) has shown a lot of promise in providing robustness in learning as well as in sample-based optimization problems. We endeavor to provide DRO solutions for a class of sum-of-fractionals, non-convex optimization problems used for decision making in prominent areas...

Bibliographic Details
Main Authors: BOSE, Avinandan, SINHA, Arunesh, MAI, Tien
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2022
Subjects: Fractional; distributional robustness; mixed-integer second order cone; Artificial Intelligence and Robotics; Systems Architecture
Online Access:https://ink.library.smu.edu.sg/sis_research/7444
https://ink.library.smu.edu.sg/context/sis_research/article/8447/viewcontent/DRO_final.pdf
Institution: Singapore Management University
Language: English
id sg-smu-ink.sis_research-8447
record_format dspace
spelling sg-smu-ink.sis_research-8447 2022-10-20T07:37:58Z Scalable distributional robustness in a class of non convex optimization with guarantees BOSE, Avinandan; SINHA, Arunesh; MAI, Tien 2022-12-01T08:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/7444 https://ink.library.smu.edu.sg/context/sis_research/article/8447/viewcontent/DRO_final.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Fractional
distributional robustness
mixed-integer second order cone
Artificial Intelligence and Robotics
Systems Architecture
description Distributionally robust optimization (DRO) has shown a lot of promise in providing robustness in learning as well as in sample-based optimization problems. We endeavor to provide DRO solutions for a class of sum-of-fractionals, non-convex optimization problems used for decision making in prominent areas such as facility location and security games. In contrast to previous work, we find it more tractable to optimize the equivalent variance-regularized form of DRO rather than the minimax form. We transform the variance-regularized form into a mixed-integer second order cone program (MISOCP), which, while guaranteeing near global optimality, does not scale well enough to solve problems with real-world datasets. We further propose two abstraction approaches based on clustering and stratified sampling to increase scalability, which we then use for real-world datasets. Importantly, we provide near global optimality guarantees for our approach and show experimentally that our solution quality is better than the locally optimal solutions achieved by state-of-the-art gradient-based methods. We experimentally compare our different approaches and baselines, and reveal nuanced properties of a DRO solution.
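
For context on the "equivalent variance regularized form" mentioned in the description, here is a minimal sketch of the standard DRO-to-variance-regularization connection, stated for a chi-square-divergence ambiguity set of radius \rho around the empirical distribution \hat{P}_n; the loss \ell(x;\xi) and the other symbols are generic placeholders, and the exact constants and formulation in the paper's sum-of-fractionals setting may differ:

\[
\sup_{Q \,:\, D_{\chi^2}(Q \,\|\, \hat{P}_n) \le \rho} \mathbb{E}_{Q}\big[\ell(x;\xi)\big]
\;\approx\;
\mathbb{E}_{\hat{P}_n}\big[\ell(x;\xi)\big] + \sqrt{\rho \,\operatorname{Var}_{\hat{P}_n}\big[\ell(x;\xi)\big]}
\]

To first order, optimizing the worst-case (minimax) DRO objective is therefore equivalent to optimizing the empirical objective with a variance penalty, which is the variance-regularized form that the authors reformulate as an MISOCP.
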
format text
author BOSE, Avinandan
SINHA, Arunesh
MAI, Tien
title Scalable distributional robustness in a class of non convex optimization with guarantees
publisher Institutional Knowledge at Singapore Management University
publishDate 2022
url https://ink.library.smu.edu.sg/sis_research/7444
https://ink.library.smu.edu.sg/context/sis_research/article/8447/viewcontent/DRO_final.pdf
_version_ 1770576340294041600