Smart interpretable model (SIM) enabling subject matter experts in rule generation
Current Artificial Intelligence (AI) technologies are widely regarded as black boxes, whose internal structures are not inherently transparent, even though they provide powerful prediction capabilities. Having a transparent model that enables users to understand its inner workings allows them to app...
Main Authors: Christianto, Hotman; Lee, Gary Kee Khoon; Zhou, Jair Weigui; Kasim, Henry; Rajan, Deepu
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2022
Online Access: https://hdl.handle.net/10356/163219
Institution: Nanyang Technological University
Similar Items
- A Wrong Turn in History: Re-understanding the Exclusionary Rule Against Prior Negotiations in Contractual Interpretation
  by: GOH, Yihan
  Published: (2014)
- Greedy rule generation from discrete data and its use in neural network rule extraction
  by: Odajima, K., et al.
  Published: (2013)
- An interpretable neural fuzzy inference system for predictions of underpricing in initial public offerings
  by: Qian, Xiaolin, et al.
  Published: (2018)
- Leveraging the trade-off between accuracy and interpretability in a hybrid intelligent system
  by: WANG, Di, et al.
  Published: (2017)
- Interpretability and Fairness in Machine Learning: A Formal Methods Approach
  by: Bishwamittra Ghosh
  Published: (2023)