Refining learning models in grammatical inference


Bibliographic Details
Main Author: Wang, Xiangrui
Other Authors: Narendra Shivaji Chaudhari
Format: Theses and Dissertations
Language: English
Published: 2008
Subjects:
Online Access:https://hdl.handle.net/10356/13589
Institution: Nanyang Technological University
Description
Summary: Grammatical inference is a branch of computational learning theory that attacks the problem of learning grammatical models from string samples; in other words, it tries to identify the computational models that generate the sample strings. In the recent decade, due to the explosion of information, efficient ways of searching and organizing data have become very important, so processing information computationally with grammatical inference methods is of great interest. In real applications, space cost plays an important role in performance. In this thesis, we address the problem of refining learning models in grammatical inference.

For regular language learning, we formally introduce the notion of a "classification automaton", which reduces model size by identifying a single automaton for multiple string classes. On house-rent data obtained from the public folder of the Microsoft Exchange Server of Nanyang Technological University, the classification automaton is shown to reduce model size by 30% compared with a straightforward multiple-automata approach. In real-world applications there is always a maximum possible string length; based on this observation, we further introduce cover automata, which simplify the learning model under a maximum length limit, for grammatical inference. Test results on the Splice-junction Gene Sequence database demonstrate that this method reduces model size by 32% compared with the widely used deterministic finite automaton (DFA) model.

Mixed k-th order Markov chains capture stochastic information for all possible substrings up to length k+1; however, their space cost is exponential in k. We introduce the use of recurrent neural networks (RNNs) and present a pruning learning method that avoids this exponential space cost, at the price of a tradeoff between accuracy and model size.
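The exponential space cost mentioned above can be seen directly: a k-th order Markov chain keeps one probability row per length-k context, so the table grows as |alphabet|^k. The sketch below (hypothetical, not code from the thesis; the function name `train_kth_order` is invented for illustration) counts context-to-symbol transitions over a DNA-style alphabet:

```python
from collections import defaultdict

def train_kth_order(strings, k):
    """Count context -> next-symbol transitions for a k-th order Markov chain."""
    counts = defaultdict(lambda: defaultdict(int))
    for s in strings:
        for i in range(len(s) - k):
            context, nxt = s[i:i + k], s[i + k]
            counts[context][nxt] += 1
    return counts

counts = train_kth_order(["ACGTA"], k=2)
print(dict(counts["AC"]))  # {'G': 1}

# The full table has |alphabet|**k possible contexts -- exponential in k:
alphabet = "ACGT"  # e.g. gene sequences, as in the Splice-junction data
print(len(alphabet) ** 3)   # 64 contexts at k=3
print(len(alphabet) ** 10)  # 1048576 contexts at k=10
```

This blow-up in the number of contexts is exactly what motivates replacing the explicit table with a pruned RNN in the thesis.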
We propose a balancing method and present test results on the Splice-junction Gene Sequence database, which demonstrate that the method reduces model size by a factor of 105 while accuracy drops from 80% to 76%. Finally, we introduce the profile-based alignment learning (PBAL) framework to refine the model for context-free grammar learning. PBAL extends the existing Alignment-Based Learning (ABL) framework by using statistical information to refine alignments and, in turn, the learned grammatical rules. Experiments show that the converged rules improve alignment precision from 50% to 90% while reducing model size to 47% of the original.
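The alignment step that ABL-style frameworks build on can be sketched as follows (a minimal, hypothetical illustration, not the thesis's PBAL implementation): aligning two sentences that share a prefix and suffix exposes interchangeable middle spans, which are hypothesized to be constituents of the same grammatical type. PBAL then refines such alignments with statistical profiles.

```python
def align_constituents(a, b):
    """Align two token sequences on their longest common prefix and suffix;
    return the differing middle spans as candidate constituents."""
    # Longest common prefix.
    p = 0
    while p < min(len(a), len(b)) and a[p] == b[p]:
        p += 1
    # Longest common suffix that does not overlap the prefix.
    s = 0
    while s < min(len(a), len(b)) - p and a[-1 - s] == b[-1 - s]:
        s += 1
    return a[p:len(a) - s], b[p:len(b) - s]

x = "she likes green tea".split()
y = "she likes black coffee".split()
print(align_constituents(x, y))  # (['green', 'tea'], ['black', 'coffee'])
```

Here the spans `green tea` and `black coffee` would both be hypothesized as instances of one nonterminal; statistical refinement decides which such hypotheses survive.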