Deep transfer learning on continual learning

Bibliographic Details
Main Author: Sousa Leite de Carvalho, Marcus Vinicius
Other Authors: Zhang Jie
Format: Thesis-Doctor of Philosophy
Language: English
Published: Nanyang Technological University, 2023
Subjects: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Online Access: https://hdl.handle.net/10356/171082
Institution: Nanyang Technological University
id sg-ntu-dr.10356-171082
record_format dspace
last_indexed 2023-11-02T02:20:48Z
school School of Computer Science and Engineering
supervisor_email ZhangJ@ntu.edu.sg
degree Doctor of Philosophy
date_available 2023-10-12T02:46:37Z
citation Sousa Leite de Carvalho, M. V. (2023). Deep transfer learning on continual learning. Doctoral thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/171082
doi 10.32657/10356/171082
rights This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).
mimetype application/pdf
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
description Artificially intelligent agents acting in the real world interact with a multitude of data streams. As a result, they must acquire, accumulate, and retain knowledge of various tasks from non-stationary data distributions. In addition, self-governing computational agents must learn from new experiences and transfer knowledge from prior learning over long periods. Continual learning, or lifelong learning, refers to the capacity to continuously acquire new knowledge, adapt to changing circumstances, and integrate new experiences with previously learned ones over an extended span of time. Catastrophic forgetting is the primary obstacle for computational models that perform continual learning. It refers to neural networks' tendency to disrupt previously learned knowledge while being trained on new information, leading to a sudden decline in performance when the new information partially or entirely overwrites earlier knowledge. Learning agents should instead assimilate new information seamlessly, enhancing their existing knowledge while preserving most or all of what was previously acquired. Computational models that engage in continual learning are often designed to mimic the learning abilities of humans and other mammals. These animals can acquire, distill, and communicate knowledge over long periods of time, benefiting from neurophysiological processes that allow perception and motor skills to develop through experience. Unlike machines, humans and other mammals can effortlessly acquire new skills and transfer knowledge between different domains and tasks.
Moreover, the human brain has a remarkable ability to integrate multisensory information, allowing it to respond effectively under sensory ambiguity and to draw on knowledge from different domains to achieve a common objective. Consequently, agents that engage in continual learning in the real world must be able to deal with uncertainty, process a continuous stream of multisensory data, and learn multiple tasks without disrupting previously acquired knowledge. Achieving these goals has proven to be a persistent challenge for machine learning, neural network research, and the development of general intelligent systems.
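The catastrophic forgetting that the abstract describes can be reproduced in a minimal sketch (illustrative only, not taken from the thesis): a linear model is trained by plain gradient descent on one task, then naively fine-tuned on a second, conflicting task with no replay or regularisation. The tasks, targets, and hyperparameters below are hypothetical choices made for the illustration.

```python
import numpy as np

def train(w, X, y, lr=0.1, steps=200):
    # Plain gradient descent on mean-squared error for a linear model X @ w.
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(X)
        w = w - lr * grad
    return w

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 1))

# Two conflicting hypothetical tasks: task A targets y = 2x, task B targets y = -2x.
yA, yB = 2 * X[:, 0], -2 * X[:, 0]
w = np.zeros(1)

w = train(w, X, yA)            # learn task A
loss_A_before = mse(w, X, yA)  # near zero: task A is learned

w = train(w, X, yB)            # then fine-tune on task B with no continual-learning safeguard
loss_A_after = mse(w, X, yA)   # task A error jumps: the new weights overwrote the old solution
```

Because both tasks compete for the same single weight, the second training phase drives the model to the task-B optimum and erases the task-A solution entirely, which is exactly the failure mode that replay buffers, regularisation methods, and other continual-learning techniques aim to prevent.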
author2 Zhang Jie
format Thesis-Doctor of Philosophy
author Sousa Leite de Carvalho, Marcus Vinicius
title Deep transfer learning on continual learning
publisher Nanyang Technological University
publishDate 2023
url https://hdl.handle.net/10356/171082