Discovering hidden visual concepts beyond linguistic input in Infant learning
Infants develop complex visual understanding rapidly, even preceding the acquisition of linguistic input. As computer vision seeks to replicate the human vision system, understanding infant visual development may offer valuable insights. We present an interdisciplinary study exploring this question: can a computational model that imitates the infant learning process develop broader visual concepts that extend beyond the vocabulary it has heard, similar to how infants naturally learn? To investigate this, we analyze the representations of a model recently published in Science by Vong et al. [1], which is trained on longitudinal, egocentric images of a single child paired with transcribed parental speech. We introduce a training-free framework that can discover and utilize visual concept neurons hidden in the model's internal representations. Our findings show that these neurons can classify objects beyond the model's original vocabulary. Furthermore, we compare the visual representations of infant-like models with those of modern computer vision models, such as CLIP and ImageNet-pretrained models, highlighting key similarities and differences. Ultimately, our work bridges cognitive science and computer vision by analyzing the internal representations of a computational model trained solely on an infant's visual and linguistic inputs.
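The record describes the training-free framework only at this high level, so the following is a minimal, hypothetical sketch of the idea rather than the thesis's actual code. It assumes you already have image embeddings from a frozen encoder (e.g., the Vong et al. model or CLIP); the function names, the signal-to-noise scoring rule, and all data below are illustrative assumptions. The gist: rank individual embedding dimensions ("neurons") by how strongly they fire for images of a concept relative to other images, then score new images by their activation on the top-ranked neurons, with no further training.

```python
# Hypothetical sketch of training-free concept-neuron discovery.
# Real embeddings from a frozen encoder would replace the random arrays.
import numpy as np

def find_concept_neurons(pos_emb: np.ndarray, neg_emb: np.ndarray, k: int = 10) -> np.ndarray:
    """Return indices of the k dimensions that best separate concept images.

    pos_emb: (n_pos, d) embeddings of images showing the concept.
    neg_emb: (n_neg, d) embeddings of other images.
    Scoring: per-dimension signal-to-noise ratio (an assumed, simple choice).
    """
    mu_p, mu_n = pos_emb.mean(axis=0), neg_emb.mean(axis=0)
    spread = pos_emb.std(axis=0) + neg_emb.std(axis=0) + 1e-8
    snr = (mu_p - mu_n) / spread      # higher = neuron fires more for the concept
    return np.argsort(snr)[-k:]       # top-k candidate "concept neurons"

def concept_score(emb: np.ndarray, neurons: np.ndarray) -> np.ndarray:
    """Score images by their mean activation on the concept's neurons."""
    return emb[:, neurons].mean(axis=1)

# Toy usage with synthetic stand-ins for encoder outputs (d = 512).
rng = np.random.default_rng(0)
d = 512
neg = rng.normal(size=(200, d))
pos = rng.normal(size=(50, d))
pos[:, :5] += 2.0                     # pretend dimensions 0..4 encode the concept
neurons = find_concept_neurons(pos, neg, k=5)
test = rng.normal(size=(10, d))
test[:5, :5] += 2.0                   # first 5 test images contain the concept
print(np.sort(neurons))               # recovers dimensions 0..4
print(concept_score(test, neurons))   # first 5 images score noticeably higher
```

In the setting the abstract describes, the positive set would come from concepts absent from the model's training vocabulary, so a high score on the selected neurons is evidence that the representation encodes concepts beyond the words the model was trained on.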
| Main Author: | Ke, Xueyi |
|---|---|
| Other Authors: | Wen Bihan (School of Electrical and Electronic Engineering) |
| Format: | Thesis-Master by Coursework |
| Language: | English |
| Published: | Nanyang Technological University, 2025 |
| Subjects: | Computer and Information Science; Representation analysis; Egocentric multimodal learning |
| Online Access: | https://hdl.handle.net/10356/182228 |
| Citation: | Ke, X. (2024). Discovering hidden visual concepts beyond linguistic input in Infant learning. Master's thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/182228 |
| Institution: | Nanyang Technological University |