Are vision language models multimodal learners?

Since the release of accessible vision language models (VLMs) such as GPT-4V and Gemini Pro in 2023, scholars have envisaged using these artificial intelligence (AI) models to support instructors and learners at scale. In particular, the capability of these user-friendly VLMs to process visual and textual data simultaneously and generate corresponding output is considered one of their most important features. This capability is significant because human cognition benefits from multimodality, which has prompted calls for teaching, learning, and evaluation to be conducted in more diverse, sophisticated, and constructive ways. However, such multimodal educational practices have yet to be realized in everyday classrooms, and the integration of AI promises to facilitate this transformation. In this talk, we review the hypothesized parallelism between humans and VLMs as multimodal learners and its implications for the potential role of AI models in future education. Additionally, we discuss the limitations, challenges, and possible remedies for effectively integrating these models into educational settings.

Bibliographic Details
Main Author: Lee, Gyeonggeon
Other Authors: School of Mechanical and Aerospace Engineering
Format: Conference or Workshop Item
Language: English
Published: 2024
Subjects: Computer and Information Science; Artificial intelligence; Education
Online Access: https://hdl.handle.net/10356/181109
https://www.ntu.edu.sg/mae/ai-education-singapore-2024/activities/keynote-invited-talk#Content_C021_Col00
Institution: Nanyang Technological University
Conference: AI for Education Singapore 2024
Keywords: AI for Education Singapore 2024; NVIDIA
Citation: Lee, G. (2024). Are vision language models multimodal learners? AI for Education Singapore 2024. Nanyang Technological University. https://hdl.handle.net/10356/181109
Rights: © 2024 The Author. Published by Nanyang Technological University. All rights reserved.
Collection: DR-NTU (NTU Library)
Record ID: sg-ntu-dr.10356-181109
Deposited: 2024-11-14