GPU-based commonsense reasoning for real-time query answering and multimodal analysis

Bibliographic Details
Main Author: Tran, Ha Nguyen
Other Authors: Erik Cambria
Format: Theses and Dissertations
Language: English
Published: 2017
Subjects:
Online Access: http://hdl.handle.net/10356/72092
Institution: Nanyang Technological University
Description
Summary: A commonsense knowledge base is a set of facts capturing the information possessed by an ordinary person. It is also called a fundamental ontology, as it consists of very general concepts spanning all domains. Different approaches to representing such a knowledge base in practice have been proposed in recent years; most of them fall into either graph-based or rule-based knowledge representations. Reasoning and querying over such representations raise two major implementation issues, performance and scalability, because many new concepts (mined from the Web or learned through crowd-sourcing) are continuously integrated into the knowledge base. Distributed-computing methods have recently been introduced to handle these very large networks by exploiting parallelism, yet the high communication cost between the participating machines remains an open problem. In recent years, Graphics Processing Units (GPUs) have become popular computing devices owing to their massively parallel execution power: a typical GPU consists of hundreds of cores running simultaneously. Modern general-purpose GPUs have been successfully adopted to accelerate heavy workloads such as relational join operations, fundamental large-scale graph algorithms, and big data analytics. Encouraged by these promising results, the dissertation investigates whether and how GPUs can be leveraged to accelerate commonsense reasoning and query answering systems on large-scale networks.

Firstly, to address reasoning and querying on large-scale graph-based commonsense knowledge bases, the thesis presents a GPU-friendly method, called GpSense, for the subgraph matching problem, which is the core function of commonsense reasoning and query answering systems. Our approach is based on a novel filtering-and-joining strategy that is well suited to massively parallel architectures. To optimize performance further, we apply a series of techniques that increase GPU occupancy, reduce workload imbalance and, in particular, speed up subgraph matching on commonsense graphs. For graphs that cannot fit into GPU memory, we propose a multi-level graph compression technique that reduces graph size while preserving all subgraph matching results; it converts the data graph into a weighted graph small enough to be kept in GPU memory. An extensive evaluation against state-of-the-art subgraph matching algorithms on both real and synthetic data shows that our implementation scales linearly and outperforms optimized CPU-based competitors.
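The abstract gives no code; as a purely illustrative sketch, the filtering phase of a filtering-and-joining subgraph matching strategy could be a CUDA kernel in which each thread tests one data-graph vertex against a query vertex's label and degree. The kernel and all identifiers (filter_candidates, is_candidate, and so on) are hypothetical and are not taken from GpSense itself.

    #include <cuda_runtime.h>

    // Illustrative filtering step: mark every data-graph vertex v as a
    // candidate for query vertex u when v's label matches u's label and
    // v's degree is at least u's degree. One thread handles one vertex.
    __global__ void filter_candidates(const int* labels,    // data-vertex labels
                                      const int* degrees,   // data-vertex degrees
                                      int num_vertices,
                                      int query_label,      // label of query vertex u
                                      int query_degree,     // degree of query vertex u
                                      int* is_candidate)    // 0/1 output flags
    {
        int v = blockIdx.x * blockDim.x + threadIdx.x;
        if (v < num_vertices) {
            is_candidate[v] = (labels[v] == query_label &&
                               degrees[v] >= query_degree) ? 1 : 0;
        }
    }

On the host side, the flag array would typically be compacted with a prefix scan into a dense candidate list before the joining phase combines candidates along the query edges.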
Secondly, to reason over and retrieve information from rule-based knowledge bases, the thesis introduces gSparql, a fast and scalable inference and querying method for mass-storage RDF data under rule-based entailment regimes. Our approach accepts different rulesets and performs the reasoning at query time, when the inferred triples are determined by the set of triple patterns defined in the query. To answer SPARQL queries in parallel, we first present a query rewriting algorithm that extends the queries and eliminates redundant triple patterns based on the rulesets. We then convert the execution plan into a series of primitives, such as sort, merge, prefix scan, and compaction, which can be executed efficiently on GPU devices. To overcome the problem of duplicate triples, we use a combination of a Bloom filter and sort-merge algorithms on the GPU, as sketched after this summary. Experimental results on the LUBM dataset show that our solution outperforms the state-of-the-art Jena reasoner on large datasets.

Finally, we apply commonsense knowledge bases to the problem of real-time multimodal analysis. In particular, we focus on multimodal sentiment analysis, the simultaneous analysis of different modalities, e.g., speech and video, for emotion and polarity detection. Our approach takes advantage of the massively parallel processing power of modern GPUs to speed up feature extraction from the different modalities. In addition, to extract important textual features from multimodal sources, we generate domain-specific graphs based on commonsense knowledge and apply GPU-based graph traversal for fast feature detection. Powerful extreme learning machine (ELM) classifiers are then trained on the extracted features to build the sentiment analysis model. Experiments on the YouTube dataset achieve an accuracy of 78%, outperforming all previous systems; in terms of processing speed, feature extraction is several orders of magnitude faster than CPU-based counterparts.
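For the gSparql deduplication step mentioned above, a rough sketch of how duplicate inferred triples might be removed with GPU sort-and-compaction primitives is given below, using Thrust. The 64-bit packing of dictionary-encoded triples and the function name deduplicate_triples are assumptions made for illustration; the Bloom filter pass that precedes this stage in gSparql is not shown.

    #include <thrust/device_vector.h>
    #include <thrust/sort.h>
    #include <thrust/unique.h>
    #include <cstdint>

    // Illustrative duplicate elimination for inferred RDF triples on the GPU.
    // Each triple is assumed to be dictionary-encoded and packed into one
    // 64-bit key (subject/predicate/object IDs in separate bit fields).
    void deduplicate_triples(thrust::device_vector<uint64_t>& triples)
    {
        thrust::sort(triples.begin(), triples.end());                    // GPU sort
        auto new_end = thrust::unique(triples.begin(), triples.end());   // drop adjacent duplicates
        triples.erase(new_end, triples.end());                           // compact the vector
    }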
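Likewise, the GPU-based graph traversal used for textual feature detection is described only at a high level; one hypothetical realization is a kernel that, for each concept found in the input text, walks its adjacency list in a CSR-encoded domain-specific commonsense graph and flags the reached concepts as features. All identifiers here are invented for the sketch and do not come from the thesis.

    #include <cuda_runtime.h>

    // Illustrative one-hop feature expansion over a commonsense graph in CSR
    // form: one thread per seed concept marks the seed and its direct
    // neighbours as active textual features.
    __global__ void expand_concepts(const int* row_offsets,   // CSR row offsets
                                    const int* col_indices,   // CSR adjacency lists
                                    const int* seed_concepts, // concept IDs found in the text
                                    int num_seeds,
                                    int* feature_flags)       // 0/1 per concept in the graph
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < num_seeds) {
            int c = seed_concepts[i];
            feature_flags[c] = 1;                          // the seed concept itself
            for (int e = row_offsets[c]; e < row_offsets[c + 1]; ++e)
                feature_flags[col_indices[e]] = 1;         // and its direct neighbours
        }
    }

The resulting feature flags would then be gathered into a feature vector and passed to the ELM classifier.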