DeepMaxSAT : encode logical representation into deep learning models for information extraction
Main Author: | |
---|---|
Other Authors: | |
Format: | Final Year Project |
Language: | English |
Published: | Nanyang Technological University, 2020 |
Subjects: | |
Online Access: | https://hdl.handle.net/10356/139058 |
Institution: | Nanyang Technological University |
Summary: | Information extraction (IE) is the task of generating structured information from given texts. Although deep learning has achieved significant success in information extraction, most deep learning models are black boxes and thus lack the ability to encode domain knowledge and model complex relationships. To increase learning efficiency, one possible constraint to integrate into the model is the Maximum Satisfiability (MAX-SAT) problem, which takes logic rules as a set of clauses and seeks truth assignments that minimize the sum of the weights of the unsatisfied clauses. To give deep learning models this logical representation capability, we propose adding a MAX-SAT transformation layer on top of a deep neural network, which can be trained via end-to-end gradient descent. The integrated model improves task performance under the constraint of the logic rules, while the weights of the logic rules remain adaptable to the training data. |
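The summary describes the core idea at a high level: a MAX-SAT objective (minimize the summed weights of unsatisfied clauses) is placed on top of a neural network so that the rule weights are learned jointly with the network, end to end. The sketch below is only an illustration of that idea, not the project's actual implementation: it assumes the MAX-SAT constraint is relaxed into a differentiable soft-violation penalty, and the clause encoding, layer sizes, and loss weighting are hypothetical choices.

```python
# Illustrative sketch (not the thesis's implementation): a network whose
# output probabilities are regularized by a differentiable soft MAX-SAT
# penalty with learnable clause weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftMaxSATLayer(nn.Module):
    """Scores how badly soft truth assignments violate weighted clauses.

    Each clause is a list of (variable_index, polarity) literals; the clause
    weights are learnable, so the importance of each logic rule can adapt to
    the training data.
    """
    def __init__(self, clauses):
        super().__init__()
        self.clauses = clauses
        self.log_weights = nn.Parameter(torch.zeros(len(clauses)))

    def forward(self, probs):
        # probs: (batch, num_vars), soft truth values in [0, 1]
        penalties = []
        for literals in self.clauses:
            # A clause is unsatisfied only when every literal is false; the
            # product of the literals' "false" probabilities is a smooth
            # surrogate for that violation.
            unsat = torch.ones(probs.size(0), device=probs.device)
            for var, positive in literals:
                lit_true = probs[:, var] if positive else 1.0 - probs[:, var]
                unsat = unsat * (1.0 - lit_true)
            penalties.append(unsat)
        penalties = torch.stack(penalties, dim=1)        # (batch, num_clauses)
        weights = F.softplus(self.log_weights)           # keep weights positive
        return (weights * penalties).sum(dim=1).mean()   # weighted violation

class DeepMaxSATTagger(nn.Module):
    """Backbone network with the soft MAX-SAT layer on top."""
    def __init__(self, in_dim, num_vars, clauses):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, num_vars)
        )
        self.maxsat = SoftMaxSATLayer(clauses)

    def forward(self, x):
        probs = torch.sigmoid(self.backbone(x))
        return probs, self.maxsat(probs)

# Toy usage: the rule "var0 -> var1" encoded as the clause (NOT var0 OR var1).
model = DeepMaxSATTagger(in_dim=16, num_vars=2, clauses=[[(0, False), (1, True)]])
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 16)
target = torch.randint(0, 2, (8, 2)).float()
probs, logic_loss = model(x)
loss = F.binary_cross_entropy(probs, target) + 0.1 * logic_loss  # 0.1 is illustrative
opt.zero_grad()
loss.backward()
opt.step()
```

Because the logic penalty is differentiable, gradient descent updates both the backbone and the clause weights in one pass, which mirrors the summary's claim that the rule weights stay adaptable to the training data.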