Content-filtering AI systems: limitations, challenges and regulatory approaches

Bibliographic Details
Main Authors: Marsoof, Althaf; Luco, Andrés; Tan, Harry; Joty, Shafiq
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2023
Online Access: https://hdl.handle.net/10356/170526
Institution: Nanyang Technological University
Summary: Online service providers, and even governments, have increasingly relied on Artificial Intelligence (‘AI’) to regulate content on the internet. In some jurisdictions, the law has incentivised, if not obligated, service providers to adopt measures to detect, track, and remove objectionable content such as terrorist propaganda. Consequently, service providers are being pushed to use AI to moderate online content. However, content-filtering AI systems are subject to limitations that affect their accuracy and transparency. These limitations open the possibility for legitimate content to be removed and objectionable content to remain online. Such an outcome could endanger human well-being and the exercise of our human rights. In view of these challenges, we argue that the design and use of content-filtering AI systems should be regulated. AI ethics principles such as transparency, explainability, fairness, and human-centricity should guide such regulatory efforts.