Toward quantifying ambiguities in artistic images

Xi Wang, Zoya Bylinskii, Aaron Hertzmann, Robert Pepperell

Research output: Contribution to journal › Article › peer-review


Abstract

It has long been hypothesized that perceptual ambiguities play an important role in aesthetic experience: a work with some ambiguity engages a viewer more than one without. However, current frameworks for testing this theory are limited by the availability of stimuli and by data collection methods. This article presents an approach to measuring the perceptual ambiguity of a collection of images. Crowdworkers are asked to describe image content after different viewing durations. Experiments are performed on images created with Generative Adversarial Networks via the Artbreeder website. We show that text processing of viewer responses provides a fine-grained way to measure and describe image ambiguities.
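The abstract does not specify the text-processing metric used; as one illustrative (and purely hypothetical) possibility, the spread of distinct labels that crowdworkers assign to an image can be summarized with Shannon entropy, where higher entropy suggests a more ambiguous image. The function name and sample responses below are assumptions, not taken from the paper.

```python
from collections import Counter
import math

def description_entropy(descriptions):
    """Shannon entropy (in bits) of the distribution of distinct
    description labels; higher values suggest more ambiguity."""
    counts = Counter(descriptions)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical crowdworker responses for one image at a short viewing duration.
responses = ["dog", "dog", "bear", "rug", "dog"]
print(round(description_entropy(responses), 3))  # → 1.371
```

An image that every viewer describes the same way would score 0 bits, while images eliciting many different descriptions score higher, so entropy gives one simple per-image ambiguity scale.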

Original language: English
Article number: 13
Journal: ACM Transactions on Applied Perception
Volume: 17
Issue number: 4
DOIs
Publication status: Published - 6 Nov 2020

Keywords

  • Aesthetics
  • Datasets
  • Generative adversarial networks (GAN)
  • Image descriptions
  • Text tagging
