Neural logic vision language explainer

Bibliographic Details
Main Authors: Yang, Xiaofeng, Liu, Fayao, Lin, Guosheng
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2023
Online Access:https://hdl.handle.net/10356/172228
Institution: Nanyang Technological University
Description
Summary: If we compare how humans reason and how deep models reason, humans reason symbolically in a formal language called logic, while most deep models reason in a black-box manner. A natural question to ask is "Do trained deep models reason similarly to humans?" or "Can we explain the reasoning of deep models in the language of logic?" In this work, we present NeurLogX to explain the reasoning process of deep vision language models in the language of logic. Given a trained vision language model, our method starts by generating reasoning facts through augmenting the input data. We then develop a differentiable inductive logic programming framework to learn interpretable logic rules from these facts. We show our results on various popular vision language models. Interestingly, we observe that almost all of the tested models can reason logically.
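
The summary describes a two-step pipeline: generate fuzzy reasoning facts from a trained model on augmented inputs, then fit a differentiable inductive logic programming (ILP) model that selects interpretable rules. The sketch below illustrates only the general idea of the second step in PyTorch; the predicates, candidate rule bodies, and fact values are invented for illustration and this is not the NeurLogX implementation.

```python
# Minimal differentiable-ILP sketch (illustrative, not the authors' code).
# Facts are fuzzy truth values in [0, 1]; rule weights are learned by
# gradient descent, and high-weight rules are read off as logic rules.
import torch

# Hypothetical body predicates per example: is_red(x), is_round(x), on_table(x).
facts = torch.tensor([
    [0.9, 0.8, 0.1],
    [0.2, 0.9, 0.7],
    [0.8, 0.1, 0.9],
    [0.9, 0.9, 0.2],
])

# Observed truth of a hypothetical head atom, e.g. is_apple(x).
labels = torch.tensor([1.0, 0.0, 0.0, 1.0])

# Candidate rule bodies: each row masks the predicates in one conjunction,
# e.g. row 0 encodes "is_red(x) AND is_round(x) -> is_apple(x)".
bodies = torch.tensor([
    [1.0, 1.0, 0.0],
    [0.0, 1.0, 1.0],
    [1.0, 0.0, 1.0],
])

# One learnable weight per candidate rule; training selects the rules
# that best explain the model's behaviour on the generated facts.
rule_logits = torch.zeros(len(bodies), requires_grad=True)
opt = torch.optim.Adam([rule_logits], lr=0.1)

def forward(facts):
    # Soft conjunction (product t-norm) over each rule's body atoms;
    # predicates outside the body contribute a neutral factor of 1.
    body_truth = torch.prod(facts.unsqueeze(1) * bodies + (1 - bodies), dim=2)
    # Soft disjunction over rules, gated by sigmoid rule weights.
    w = torch.sigmoid(rule_logits)
    return 1 - torch.prod(1 - w * body_truth, dim=1)

for step in range(200):
    opt.zero_grad()
    loss = torch.nn.functional.binary_cross_entropy(forward(facts), labels)
    loss.backward()
    opt.step()

# Rules whose weights approach 1 form the learned, interpretable program.
for w, body in zip(torch.sigmoid(rule_logits).tolist(), bodies.tolist()):
    print(f"weight={w:.2f} body_mask={body}")
```

Because the soft AND/OR operations are differentiable everywhere, rule selection reduces to ordinary gradient descent, which is the property that lets an ILP objective sit on top of a neural model's (fuzzy) outputs.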