From noise to information: discriminative tasks based on randomized neural networks and generative tasks based on diffusion models


Bibliographic Details
Main Author: Hu, Minghui
Other Authors: Arokiaswami Alphones
Format: Thesis-Doctor of Philosophy
Language:English
Published: Nanyang Technological University 2024
Subjects:
Online Access:https://hdl.handle.net/10356/177388
Institution: Nanyang Technological University
Description
Summary: In this thesis, I delve into the realm of noise and information, exploring the application and capabilities of randomized neural networks in discriminative tasks, as well as the utilization of diffusion models in generative tasks. I begin by investigating the inherent randomness in neural networks and how it can be harnessed to perform discriminative tasks with high accuracy. I then transition to the domain of generative tasks, where I employ diffusion models to generate high-quality data from noise. The primary innovations include:

1. Part I: Randomized Neural Networks for Discriminative Tasks
- The introduction of an unsupervised learning approach and self-distillation for randomized neural networks, which improves the efficiency of utilizing limited data resources and enhances the overall capabilities of the model.
- The development of an ensemble deep RVFL network for regression tasks, incorporating techniques such as boosting factors, skip connections, and an ensemble scheme for improved predictive power.
- Further structures for randomized neural networks, including Automated Layer-wise Solution, Adaptive Ensemble, Deep Reservoir variants, and Noise Elimination. Our broader objective is to extend the utility of RVFL networks across various domains and applications.

2. Part II: Diffusion Models for Generative Tasks
- The design of a Vector Quantized Discrete Diffusion Model (VQ-DDM) for efficient and high-fidelity image generation, which employs a two-stage process: a discrete VAE followed by a diffusion model that fits the latent code distribution.
- The introduction of a Unified Discrete Diffusion model (UniD3) for simultaneous vision-language generation, which constructs a joint probability distribution by mixing discrete image and text tokens.
- The proposal of methods to improve the controllability of text-conditional diffusion models, including a Generalized ControlNet for multi-modal input and a plug-and-play module for low-frequency control.

The findings of this thesis demonstrate the potential of randomized neural networks and diffusion models in handling complex machine learning tasks, offering new insights into the interplay between noise and information. The innovative approaches presented here open up new research directions and have the potential to significantly impact the field of machine learning.
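To make the RVFL idea in Part I concrete, the following is a minimal sketch of a basic single-layer RVFL regressor, not the thesis's actual models (the boosting factors, skip connections, and ensemble scheme are omitted). It illustrates the defining ingredients: a hidden layer with fixed random weights that is never trained, direct links from the input to the output layer, and a closed-form ridge-regression readout. All function and variable names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_rvfl(X, y, n_hidden=64, reg=1e-2):
    """Fit a basic RVFL regressor: random frozen hidden layer + ridge readout."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random hidden weights, never trained
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # randomized nonlinear features
    D = np.hstack([X, H])                         # direct links: concatenate input with features
    # Closed-form ridge regression readout: beta = (D^T D + reg*I)^(-1) D^T y
    beta = np.linalg.solve(D.T @ D + reg * np.eye(D.shape[1]), D.T @ y)
    return W, b, beta

def predict_rvfl(X, W, b, beta):
    D = np.hstack([X, np.tanh(X @ W + b)])
    return D @ beta

# Toy regression: approximate y = sin(x) on [-3, 3]
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = train_rvfl(X, y)
err = np.mean((predict_rvfl(X, W, b, beta) - y) ** 2)
```

Because only the output weights are solved for, training reduces to one linear system instead of iterative backpropagation, which is what makes the deep and ensemble RVFL variants described above computationally attractive.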