Comparing humans to automation in rating photographic aesthetics

Computer vision researchers have recently developed automated methods for rating the aesthetic appeal of a photograph. Machine learning techniques, applied to large databases of photos, mimic the mean ratings of online viewers with reasonably good accuracy. However, owing to the many factors underlying aesthetics, such techniques likely do not generalize well beyond the data on which they are trained. This paper reviews recent attempts to compare human ratings, obtained in a controlled setting, with ratings produced by machine learning techniques. We review methods for obtaining meaningful ratings both from selected groups of judges and from crowdsourcing. We find that state-of-the-art techniques for automatic aesthetic evaluation are only weakly correlated with human ratings, which underscores the importance of collecting the data used to train automated systems under carefully controlled conditions.


Bibliographic Details
Main Authors: Kakarala, Ramakrishna, Agrawal, Abhishek, Morales, Sandino
Other Authors: Lin, Qian
Format: Conference or Workshop Item
Language: English
Published: 2018
Subjects: Photography; DRNTU::Engineering::Computer science and engineering; Aesthetics
Online Access:https://hdl.handle.net/10356/88344
http://hdl.handle.net/10220/46915
Institution: Nanyang Technological University
Language: English
id sg-ntu-dr.10356-88344
record_format dspace
spelling sg-ntu-dr.10356-883442020-03-07T11:48:46Z Comparing humans to automation in rating photographic aesthetics Kakarala, Ramakrishna Agrawal, Abhishek Morales, Sandino Lin, Qian Allebach, Jan P. Fan, Zhigang School of Computer Science and Engineering Proceedings of SPIE - Imaging and Multimedia Analytics in a Web and Mobile World 2015 Photography DRNTU::Engineering::Computer science and engineering Aesthetics Computer vision researchers have recently developed automated methods for rating the aesthetic appeal of a photograph. Machine learning techniques, applied to large databases of photos, mimic with reasonably good accuracy the mean ratings of online viewers. However, owing to the many factors underlying aesthetics, it is likely that such techniques for rating photos do not generalize well beyond the data on which they are trained. This paper reviews recent attempts to compare human ratings, obtained in a controlled setting, to ratings provided by machine learning techniques. We review methods to obtain meaningful ratings both from selected groups of judges and also from crowd sourcing. We find that state-of-the-art techniques for automatic aesthetic evaluation are only weakly correlated with human ratings. This shows the importance of obtaining data used for training automated systems under carefully controlled conditions. MOE (Min. of Education, S’pore) Published version 2018-12-11T09:16:19Z 2019-12-06T17:01:10Z 2018-12-11T09:16:19Z 2019-12-06T17:01:10Z 2015 Conference Paper Kakarala, R., Agrawal, A., & Morales, S. (2015). Comparing humans to automation in rating photographic aesthetics. Proceedings of SPIE - Imaging and Multimedia Analytics in a Web and Mobile World 2015, 9408, 94080C-. doi:10.1117/12.2084991 https://hdl.handle.net/10356/88344 http://hdl.handle.net/10220/46915 10.1117/12.2084991 en © 2015 Society of Photo-optical Instrumentation Engineers (SPIE). This paper was published in Proceedings of SPIE - Imaging and Multimedia Analytics in a Web and Mobile World 2015 and is made available as an electronic reprint (preprint) with permission of Society of Photo-optical Instrumentation Engineers (SPIE). The published version is available at: [http://dx.doi.org/10.1117/12.2084991]. One print or electronic copy may be made for personal use only. Systematic or multiple reproduction, distribution to multiple locations via electronic or other means, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper is prohibited and is subject to penalties under law. 10 p. application/pdf
institution Nanyang Technological University
building NTU Library
country Singapore
collection DR-NTU
language English
topic Photography
DRNTU::Engineering::Computer science and engineering
Aesthetics
spellingShingle Photography
DRNTU::Engineering::Computer science and engineering
Aesthetics
Kakarala, Ramakrishna
Agrawal, Abhishek
Morales, Sandino
Comparing humans to automation in rating photographic aesthetics
description Computer vision researchers have recently developed automated methods for rating the aesthetic appeal of a photograph. Machine learning techniques, applied to large databases of photos, mimic the mean ratings of online viewers with reasonably good accuracy. However, owing to the many factors underlying aesthetics, such techniques likely do not generalize well beyond the data on which they are trained. This paper reviews recent attempts to compare human ratings, obtained in a controlled setting, with ratings produced by machine learning techniques. We review methods for obtaining meaningful ratings both from selected groups of judges and from crowdsourcing. We find that state-of-the-art techniques for automatic aesthetic evaluation are only weakly correlated with human ratings, which underscores the importance of collecting the data used to train automated systems under carefully controlled conditions.
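The weak correlation described above is the kind of result one quantifies by correlating per-photo human mean ratings with the scores produced by an automated model. The following is a minimal sketch of such a comparison, not taken from the paper: it assumes the paired scores are already available as lists, and the numbers and variable names are purely illustrative.

# Minimal sketch (illustrative only, not the paper's code): correlate per-photo
# human mean ratings with scores from a hypothetical automated aesthetic model.
from scipy.stats import spearmanr, pearsonr

# Hypothetical paired scores for eight photos; real data would come from the
# controlled human study and the automated rater being evaluated.
human_mean_ratings = [6.2, 4.8, 7.1, 5.0, 3.9, 6.7, 5.5, 4.1]
model_scores = [0.71, 0.55, 0.62, 0.58, 0.40, 0.66, 0.52, 0.60]

# Spearman's rho compares rank orderings, so the two rating scales need not match.
rho, rho_p = spearmanr(human_mean_ratings, model_scores)
# Pearson's r measures linear agreement between the raw scores.
r, r_p = pearsonr(human_mean_ratings, model_scores)

print(f"Spearman rho = {rho:.2f} (p = {rho_p:.3f})")
print(f"Pearson r = {r:.2f} (p = {r_p:.3f})")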
author2 Lin, Qian
author_facet Lin, Qian
Kakarala, Ramakrishna
Agrawal, Abhishek
Morales, Sandino
format Conference or Workshop Item
author Kakarala, Ramakrishna
Agrawal, Abhishek
Morales, Sandino
author_sort Kakarala, Ramakrishna
title Comparing humans to automation in rating photographic aesthetics
title_short Comparing humans to automation in rating photographic aesthetics
title_full Comparing humans to automation in rating photographic aesthetics
title_fullStr Comparing humans to automation in rating photographic aesthetics
title_full_unstemmed Comparing humans to automation in rating photographic aesthetics
title_sort comparing humans to automation in rating photographic aesthetics
publishDate 2018
url https://hdl.handle.net/10356/88344
http://hdl.handle.net/10220/46915
_version_ 1681041610835492864