Probabilistic approach to constituent structure induction for Filipino

Bibliographic Details
Main Author: Alcantara, Danniel L.
Format: text
Language:English
Published: Animo Repository 2008
Subjects:
Online Access:https://animorepository.dlsu.edu.ph/etd_masteral/3701
https://animorepository.dlsu.edu.ph/context/etd_masteral/article/10539/viewcontent/CDTG004414_P.pdf
Institution: De La Salle University
Description
Summary: Grammar formalisms and parsing systems are fundamental resources for high-level natural language processing applications. Filipino lacks an extensive computational representation of the language's grammar and has no broad-coverage parsing mechanism. Computational approaches to automatic grammar development fall into either supervised or unsupervised categories. Supervised methods have produced better results, but they require treebanks or bracketed corpora as input; no such computational resources are currently available for Filipino. Unsupervised approaches instead use statistical and probabilistic information to estimate the structure of the input language. This research develops an unsupervised grammar induction system for the Filipino language, focusing on constituent structure. Three models are presented to handle the distribution and substitutability of constituents. The models were evaluated on 1,264 sentences of length 1-10. Experimentation with the Selection Model showed that the frequency of occurrence of a sequence is the most effective measurement for identifying constituency. The free-word-order phenomenon of Filipino was highlighted by the substitutable constituents learned by the Greedy Merge Model. The Constituent Context Model, which produced the highest ratings of the three, achieved 66.8% precision, 72.6% recall, and a 69.5% overall F-measure. These results are comparable to those of existing unsupervised parse induction systems, even though the training corpus used is a fraction of the size used in existing work. The models cannot properly handle the dependency between words and phrases, and it is recommended that dependency be addressed to further improve performance.
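The overall measure reported in the abstract is consistent with the standard F-measure, the harmonic mean of precision and recall. A minimal sketch of that computation, using the abstract's reported scores (the function name is illustrative, not from the thesis):

```python
def f_measure(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (the F1 score)."""
    return 2 * precision * recall / (precision + recall)

# Reported Constituent Context Model scores from the abstract.
precision, recall = 0.668, 0.726
f1 = f_measure(precision, recall)
print(f"F-measure: {f1:.1%}")  # ~69.6%, matching the reported 69.5% within rounding
```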