Handling non-stationary data streams under complex environments

Bibliographic Details
Main Author: Weng, Weiwei
Other Authors: Zhang, Jie
Format: Thesis-Doctor of Philosophy
Language: English
Published: Nanyang Technological University 2024
Online Access: https://hdl.handle.net/10356/178601
Institution: Nanyang Technological University
Description
Summary: In the digital era, where data generation is incessant and often presents non-stationary distributions, intelligent agents face the imperative challenge of emulating human-like learning and adaptation. Handling non-stationary data streams effectively is essential for intelligent agents, enabling them to adapt and make accurate predictions in dynamic environments. Among the diverse research domains in machine learning, continual learning emerges as a crucial paradigm, enabling networks to accumulate knowledge over sequential tasks without retraining from scratch in ever-evolving data environments. A paramount challenge in continual learning is catastrophic forgetting, characterized by the performance degradation of neural networks on previously acquired tasks when they are subsequently trained on a new task. This problem stems from the stability-plasticity dilemma, which depicts a spectrum in artificial neural networks: stability focuses on retaining learned knowledge, while plasticity is essential for adapting to new data distributions. Furthermore, handling concept drift and label scarcity in never-ending data streams is vital for effective learning; the former may render a learning agent obsolete as the underlying distribution parameters shift, while the latter is prevalent in real-world scenarios where true class labels are delayed or unavailable. Traditional deep learning paradigms, anchored in offline learning, necessitate retraining the network from scratch to accommodate new information. Such an approach is not only computationally and memory-intensive but also poses significant privacy concerns. In non-stationary environments, continual learning models must also focus on resource efficiency, ideally updating the network with only new training instances. These challenges are common in the broader context of non-stationary data streams; addressing the pivotal problems in continual learning is therefore a crucial step toward handling non-stationary environments effectively.

This thesis endeavors to develop advanced algorithms adept at managing complex environments in a continual learning manner. Four pivotal contributions are presented, each addressing specific aspects of non-stationary data stream processing; they span both online learning techniques and continual learning approaches to cover a broad range of challenges in non-stationary environments.

Firstly, an innovative online learning technique, Parsimonious Network++ (ParsNet++), is proposed to tackle online quality monitoring under label scarcity. Utilizing limited labeled samples, ParsNet++ significantly reduces manual labor in product quality inspection, embodying a data-driven, weakly supervised approach. For effective adjustment to environmental changes, the Autonomous Clustering Mechanism (ACM), a flexible density estimation method, is adopted to construct complex probability densities that steer the structural learning process within its dynamic hidden layer.
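As a purely illustrative aside, the sketch below shows one generic way a density estimate can steer structural growth: a new component (standing in for a hidden-layer node) is spawned when an incoming sample is poorly covered by the current density, and the closest component is refined otherwise. This is a conceptual analogue only, not the ACM procedure from the thesis; the class name, novelty threshold, and update rule are assumptions.

```python
import numpy as np

class GrowingDensityModel:
    """Toy mixture whose components grow when a sample is poorly covered."""

    def __init__(self, novelty_threshold=1e-3, var=1.0):
        self.means = []          # component centers (stand-ins for hidden nodes)
        self.counts = []         # samples absorbed per component
        self.novelty_threshold = novelty_threshold
        self.var = var

    def coverage(self, x):
        """Unnormalized mixture density of x under the current components."""
        if not self.means:
            return 0.0
        dists = np.array([np.sum((x - m) ** 2) for m in self.means])
        kernels = np.exp(-dists / (2.0 * self.var))
        weights = np.array(self.counts, dtype=float)
        return float(np.sum(weights / weights.sum() * kernels))

    def update(self, x):
        x = np.asarray(x, dtype=float)
        if self.coverage(x) < self.novelty_threshold:
            # Poorly covered sample: grow a new component (akin to adding a node).
            self.means.append(x.copy())
            self.counts.append(1)
        else:
            # Otherwise refine the closest component with an online mean update.
            i = int(np.argmin([np.sum((x - m) ** 2) for m in self.means]))
            self.counts[i] += 1
            self.means[i] += (x - self.means[i]) / self.counts[i]


model = GrowingDensityModel()
for sample in np.random.randn(500, 2):   # a toy 2-D stream
    model.update(sample)
print(len(model.means), "components grown from the stream")
```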
Secondly, we study the challenge of cross-domain multistream classification under extreme label scarcity in the source domain and the absence of labeled target-domain data. Learning Streaming Process from Partial Ground Truth (LEOPARD) is proposed, which diverges from traditional static transfer learning settings and is specifically tailored for streaming data. LEOPARD fosters a clustering-friendly latent space via an adaptive structure and achieves cross-domain alignment through a domain-invariant network, together responding to asynchronous drifts and domain discrepancies. More importantly, it operates without the need for continuous label availability, relying only on a limited set of prerecorded labeled samples from the source stream to establish class-to-cluster relationships.

Thirdly, we investigate continual learning problems. Unlike the preceding problems, continual learning aims to achieve both forward and backward transfer; its core lies in utilizing prior experiences to assimilate new knowledge over time, necessitating the capability to overcome catastrophic forgetting. Regularization approaches often overlook inter-task synaptic relevance, even though certain neurons may harbor information shared across tasks. Furthermore, the importance matrix in conventional regularization approaches tends to explode as tasks accumulate (a generic sketch of this family of importance-based regularizers follows the summary). ISYANA is introduced to address these problems via task-to-synapses and task-to-task modules, enhancing the importance matrix with a per-parameter learning-rate implementation.

Finally, the Continual Learning Approach for Many Processes (CLAMP) is designed for efficient deployment in cross-domain multistream continual learning with an unlabeled target domain. A distinctive feature of CLAMP is its reliance on sequence-aware assessors, which produce a set of weights for every sample. Dual assessors are trained in a meta-learning manner using random transformation techniques and similar samples from the source process to address the noisy pseudo-label problem (a minimal sketch of such per-sample loss weighting also follows the summary). This approach not only controls each sample's influence, addressing the issues of negative transfer and noisy pseudo-labels, but also governs the interaction of multiple loss functions to achieve a proper trade-off between stability and plasticity, thus preventing catastrophic forgetting. Overall, CLAMP marks a significant stride in cross-domain continual learning, adeptly integrating adversarial domain adaptation to effectively address the challenges of label scarcity and domain shift.
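To make the discussion of regularization-based continual learning concrete, the following is a minimal sketch of a generic importance-weighted quadratic penalty, the kind of importance-matrix regularizer (in the spirit of EWC-style methods) that the summary notes can grow problematic as tasks accumulate. It is not ISYANA's task-to-synapses or task-to-task mechanism; all names and the placeholder importance values are illustrative.

```python
# Generic importance-weighted regularization for continual learning
# (a sketch of the family discussed above, not the ISYANA algorithm).
import torch

def importance_penalty(model, importance, anchor_params, strength=1.0):
    """Quadratic penalty discouraging drift on parameters marked as important
    for previously learned tasks; `importance` plays the role of the
    per-parameter importance matrix."""
    penalty = torch.zeros(())
    for name, param in model.named_parameters():
        if name in importance:
            penalty = penalty + (importance[name] * (param - anchor_params[name]) ** 2).sum()
    return strength * penalty

# Usage on a new task: combine the task loss with the stability penalty.
model = torch.nn.Linear(4, 2)
anchor_params = {n: p.detach().clone() for n, p in model.named_parameters()}
importance = {n: torch.ones_like(p) for n, p in model.named_parameters()}  # placeholder values
x, y = torch.randn(8, 4), torch.randint(0, 2, (8,))
loss = torch.nn.functional.cross_entropy(model(x), y) + importance_penalty(model, importance, anchor_params)
loss.backward()
```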
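Similarly, the sketch below illustrates per-sample weighting of a pseudo-label loss, in the spirit of the assessor-produced weights described for CLAMP. The assessor itself is assumed rather than implemented, and the toy inputs are illustrative, not the thesis implementation.

```python
# Minimal sketch of per-sample weighting of a pseudo-label loss; the assessor
# that would produce the weights is assumed, not implemented here.
import torch
import torch.nn.functional as F

def weighted_pseudo_label_loss(logits, pseudo_labels, sample_weights):
    """Cross-entropy on pseudo-labels, scaled per sample so that samples the
    assessor judges noisy or transfer-harmful contribute less."""
    per_sample = F.cross_entropy(logits, pseudo_labels, reduction="none")
    return (sample_weights * per_sample).mean()

# Toy example: weights near zero suppress unreliable pseudo-labels.
logits = torch.randn(4, 3)
pseudo_labels = torch.tensor([0, 2, 1, 1])
sample_weights = torch.tensor([0.9, 0.1, 0.7, 0.5])
print(weighted_pseudo_label_loss(logits, pseudo_labels, sample_weights))
```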