Deep reinforcement learning for intractable routing & inverse problems
| Format: | Thesis-Doctor of Philosophy |
|---|---|
| Language: | English |
| Published: | Nanyang Technological University, 2023 |
| Online Access: | https://hdl.handle.net/10356/164058 |
| Institution: | Nanyang Technological University |
Summary: Solving intractable problems with huge or infinite solution spaces is challenging and has motivated much research. Classical methods mainly focus on fast search via approximation or (meta)heuristics aided by regularizers, but neither their solution quality nor their inference time is satisfactory. Recently, a popular trend is to leverage deep learning to learn to solve intractable problems, and impressive progress has been made with good solution quality and fast inference. Among learning-based methods, those based on deep reinforcement learning (DRL) show superiority, since they learn a more flexible policy with less supervision; notable successes include board games, video games, and robotics. However, most current methods are proposed for specific tasks and neglect practical settings. To push DRL one step closer to real-life applications, we propose a paradigm that can learn to solve a wider range of intractable problems, and we attempt to provide instruction and insight on how to systematically learn to solve more practical intractable problems via DRL. Following the proposed paradigm, we propose four frameworks for four practical intractable problems: the travelling salesman problem with time windows and rejection (TSPTWR), the multiple TSPTWR (mTSPTWR), robust image denoising, and customized low-light image enhancement. In particular, unlike counterpart methods, where the deep neural network (DNN) is the main concern, our paradigm also studies the modelling of the Markov decision process (MDP) and the design of the action and reward. By doing so, we can flexibly circumvent complex DNN design and extend existing DRL-based methods to more practical problems. Extensive experiments show that our proposed frameworks outperform both classical and learning-based baselines on these applications.

The success of these four applications demonstrates that the proposed paradigm is a general and promising way to solve intractable problems efficiently. Finally, we conclude the thesis and point out some interesting directions for future work.
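The MDP modelling the summary refers to, with its state, action, and reward design for TSPTWR, can be illustrated with a minimal sketch. Everything below (the coordinates, time windows, rejection penalty, and the greedy rule standing in for a learned DRL policy) is an illustrative assumption, not the thesis's actual formulation.

```python
import math
import random

# Minimal MDP sketch for TSP with time windows and rejection (TSPTWR).
# State: current position, current time, set of unvisited customers.
# Action: pick the next customer to travel to.
# Reward: negative travel cost, plus a fixed penalty when a customer's
# time window is missed (the customer is "rejected").
random.seed(0)

N = 6  # number of customers (illustrative)
depot = (0.0, 0.0)
nodes = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(N)]
# Each customer has a time window [open, close]; arriving after `close`
# means the customer is rejected at a fixed penalty.
windows = [(0.0, random.uniform(15, 40)) for _ in range(N)]
REJECT_PENALTY = 10.0


def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])


def rollout(policy):
    """Roll out one episode under `policy`; return total reward."""
    pos, t, unvisited, total = depot, 0.0, set(range(N)), 0.0
    while unvisited:
        i = policy(pos, t, unvisited)
        travel = dist(pos, nodes[i])
        t += travel
        total -= travel          # travel cost is negative reward
        pos = nodes[i]
        if t > windows[i][1]:
            total -= REJECT_PENALTY  # arrived too late: customer rejected
        else:
            t = max(t, windows[i][0])  # arrived early: wait for window to open
        unvisited.remove(i)
    total -= dist(pos, depot)    # return to the depot
    return total


def greedy(pos, t, unvisited):
    # Stand-in for a learned policy: pick the nearest still-feasible
    # customer, falling back to the overall nearest if none is feasible.
    feasible = [i for i in unvisited
                if t + dist(pos, nodes[i]) <= windows[i][1]]
    pool = feasible or list(unvisited)
    return min(pool, key=lambda i: dist(pos, nodes[i]))


total_reward = rollout(greedy)
```

In a DRL framework such as those the summary describes, `greedy` would be replaced by a trained policy network, and `rollout` would supply the episode returns used to update it, e.g. with a policy-gradient method.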