Content-filtering AI systems–limitations, challenges and regulatory approaches

Online service providers, and even governments, have increasingly relied on Artificial Intelligence (‘AI’) to regulate content on the internet. In some jurisdictions, the law has incentivised, if not obligated, service providers to adopt measures to detect, track, and remove objectionable content such as terrorist propaganda. Consequently, service providers are being pushed to use AI to moderate online content. However, content-filtering AI systems are subject to limitations that affect their accuracy and transparency. These limitations open the possibility for legitimate content to be removed and objectionable content to remain online. Such an outcome could endanger human well-being and the exercise of our human rights. In view of these challenges, we argue that the design and use of content-filtering AI systems should be regulated. AI ethics principles such as transparency, explainability, fairness, and human-centricity should guide such regulatory efforts.


Bibliographic Details
Main Authors: Marsoof, Althaf; Luco, Andrés; Tan, Harry; Joty, Shafiq
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2023
Subjects: Business::Law; Engineering::Computer science and engineering; Content Moderation; AI and Automation
Online Access:https://hdl.handle.net/10356/170526
Institution: Nanyang Technological University
id: sg-ntu-dr.10356-170526
record_format: dspace
Schools: School of Computer Science and Engineering; Nanyang Business School; School of Humanities
Type: Journal Article
Date deposited: 2023-09-18
Citation: Marsoof, A., Luco, A., Tan, H. & Joty, S. (2023). Content-filtering AI systems–limitations, challenges and regulatory approaches. Information and Communications Technology Law, 32(1), 64-101.
DOI: 10.1080/13600834.2022.2078395 (https://dx.doi.org/10.1080/13600834.2022.2078395)
ISSN: 1360-0834
Scopus ID: 2-s2.0-85130732105
Funding: We thank Micron Technology and the NTU Institute of Science and Technology for Humanity, an interdisciplinary research institute at Singapore’s Nanyang Technological University (‘NTU’), for funding the research that underpins this paper.
Rights: © 2022 Informa UK Limited, trading as Taylor & Francis Group. All rights reserved.
building: NTU Library
continent: Asia
country: Singapore
content_provider: NTU Library
collection: DR-NTU
topic: Business::Law; Engineering::Computer science and engineering; Content Moderation; AI and Automation
description: Online service providers, and even governments, have increasingly relied on Artificial Intelligence (‘AI’) to regulate content on the internet. In some jurisdictions, the law has incentivised, if not obligated, service providers to adopt measures to detect, track, and remove objectionable content such as terrorist propaganda. Consequently, service providers are being pushed to use AI to moderate online content. However, content-filtering AI systems are subject to limitations that affect their accuracy and transparency. These limitations open the possibility for legitimate content to be removed and objectionable content to remain online. Such an outcome could endanger human well-being and the exercise of our human rights. In view of these challenges, we argue that the design and use of content-filtering AI systems should be regulated. AI ethics principles such as transparency, explainability, fairness, and human-centricity should guide such regulatory efforts.