Assessing Aesthetics of Generated Abstract Images Using Correlation Structure

Sina Khajehabdollahi, Eberhard Karls University of Tübingen, Germany. Email: [email protected]
Georg Martius, Max Planck Institute for Intelligent Systems, Tübingen, Germany. Email: [email protected]
Anna Levina, Eberhard Karls University of Tübingen, Germany. Email: [email protected]

arXiv:2105.08635v1 [cs.CV] 18 May 2021

Abstract—Can we generate abstract aesthetic images without bias from natural or human-selected image corpora? Are aesthetic images singled out by their correlation functions? In this paper we address these and related questions. We generate images using compositional pattern-producing networks with random weights and varying architectures. We demonstrate that even with randomly selected weights, the correlation functions remain largely determined by the network architecture. In a controlled experiment, human subjects picked aesthetic images out of a large dataset of all generated images. Statistical analysis reveals that the correlation function is indeed different for aesthetic images.

Keywords—image generation; CPPN; neural network; image statistics; aesthetics; correlations

I. INTRODUCTION

Fig. 1. Illustration of the CPPN architecture. The network processes each pixel independently, here parametrized by the coordinates x, y and the radius r. We vary the number of layers L, the total number of neurons N (all blue), and the relative layer sizes.

We all marvel at the aesthetic beauty of natural landscapes; the statistics of these images shape our understanding of beauty. To what extent different statistical measures of an image affect the perception of beauty remains a big question. Photographs of the real world and figurative art mix the interpretation of the objects and the narrative of the image with one's aesthetic perception. Investigating how generated abstract images are perceived by human observers allows one to uncover the underlying principles.

Natural images follow universal statistical characteristics, manifested by specific scaling in the power spectrum and in the two-point correlations [1]. However, even after two-point correlations are removed, natural images remain recognizable due to higher-order relations [2]. Still, two-point correlations affect our perception of both natural and artificial images. Here we aim to understand, on the one side, to what extent such correlations relate to the aesthetic attractiveness of images and, on the other side, how specific features of artificial neural networks can be used to generate specific correlation statistics.

The automatic generation of images is typically designed to mimic real scenes, e.g. by attempting to represent the probability density of a large set of images using deep networks and variational inference (Variational Autoencoders [3]) or by adversarial training (Generative Adversarial Nets [4]). However, for answering our questions, images generated in this way are not suitable, as they do not allow us to disentangle the effects of correlation statistics from concrete object interpretations. Ideally, we need a way to generate abstract images without anchoring them on predefined examples.

Compositional Pattern-Producing Networks (CPPNs) [5] are capable of generating a large variety of images without being trained on a particular set of scenes. Images generated by specially trained/evolved CPPNs [6] were shown to be identified as natural objects both by humans and by deep neural networks [7], albeit being of an abstract nature. On the other side, without training or evolutionary selection, CPPNs simply transform and decorrelate highly correlated inputs. Interestingly, these neural networks with random weights create structured outputs that vary significantly among architectures and hyper-parameters.

We investigate how constraining the layout of CPPNs, in terms of the number of layers and the number of neurons per layer, shapes the output correlations. We demonstrate that the ensemble of such networks can generate the breadth of correlation structures observed over different types of natural images. We assess the attractiveness of the generated images to human subjects and evaluate how it is connected with the statistics of two-point correlations.

II. METHODS

The main measure of choice is the spatial correlation function, which we efficiently calculate using a convolution method. To answer our research questions, we asked 45 human participants to select aesthetic images, as detailed in Sec. II-B2. The correlation functions of these aesthetic images were then compared to both the full dataset and to more restricted architecture subsets. Furthermore, a set of natural images (photographs of nature, cities, animals, landscapes and the like) is compared to the CPPN-generated dataset based on their correlation functions. To test whether a certain set of images has a significantly different correlation function than another set, we use Welch's test (similar to a t-test, but for samples of varying size and variance).

A. Generating images with CPPNs

We use Compositional Pattern-Producing Networks (CPPNs) [5] with random weights to generate images of size 512 by 512 pixels. The architecture is illustrated in Fig. 1. To explore a variety of image styles, the architectures of the CPPNs are varied. To allow for a controlled generation of architectures, we parametrize different aspects with five hyper-parameters. L ∈ {3, 5, 10} defines the total number of layers in the architecture and N ∈ {100, 250, 500} is the total number of neurons, see also Fig. 1. An initial trial-and-error search suggested that interesting results can be obtained by varying the number of neurons in each layer. Thus, the number of neurons n(l) in layer l is expressed as:

    n(l) = C_N N e^(µl/L) (α + sin(ωl)),    (1)

where µ ∈ {−1, −0.5, −0.1, 0, 0.1, 0.5, 1} is the decay-rate parameter that specifies how the number of neurons decreases (or increases) with depth, ω ∈ {−2, −1, −0.5, 0, 0.5, 1, 2} is the frequency parameter that allows us to introduce bottlenecks into the architecture, α ∈ {2, 5} controls the strength of the oscillation term, and C_N is a normalizing factor that ensures the total number of neurons in the architecture is as close as possible to N while making sure that no layer has fewer than 2 neurons (which often results in a simple, solid-coloured image).

1) Network details: The layers are fully connected and each neuron also has a bias weight. Activation functions are hyperbolic tangents with cubed inputs (f(z) = tanh(z³)) in the hidden layers and a sigmoid for the output layer (for the red, green, and blue colour channels). Once a network architecture is defined, all weights are sampled from a normal distribution with mean 0 and standard deviation 1. An image is generated by feeding in each pixel independently in the form (x, y, r), where x ∈ [−1, 1], y ∈ [−1, 1] are the 2D coordinates (with (0, 0) at the center) and r = √(x² + y²) is the radius. Due to the multi-layer structure and random weights, the network transforms the simple inputs and creates intricate images. To give an intuition, a particular network might give larger weights to the r component initially, giving rise to images whose structures are more circular and symmetric. Further layers act to distort the image. Stronger weights give rise to noisier shapes and colours, and smaller weights to simple colours and shapes, which tend to be smoother.

2) Generated image database: We generated a large dataset of 35280 images: 40 images for each of 882 unique architectures. For each architecture configuration, the 40 images are generated by changing the random number generator (RNG) seed. These seeds are then re-used for different architecture configurations to allow for easier comparison.

B. Image analysis

In the following we detail the statistical tools and methods used to analyse the images.

1) Correlation function: To calculate the correlation function, the Pearson correlation coefficient (Eq. 2) of each image is calculated as a function of pixel-wise distance, using only the luminosity information (not taking colours into account). For example, for a distance of 1 pixel, all possible pixel pairs at distance 1 are put into the two vectors X, Y, and the Pearson correlation coefficient is calculated as:

    ρ_(X,Y) = E[(X − µ_X)(Y − µ_Y)] / (σ_X σ_Y)    (2)
            = (E[XY] − E[X]E[Y]) / ( √(E[X²] − E[X]²) √(E[Y²] − E[Y]²) ),    (3)

where E[X] denotes the expectation of the random variable X, and µ_X = E[X] and σ_X are the mean and the standard deviation of X. Note that the larger the distance, the fewer pairs exist. Conceptually, we use the correlation coefficients for a given distance, yielding a correlation function: distance vs. correlation coefficient for each image. In the remainder of this section we elaborate on how to make these computations efficient. Understanding this is not essential and may be skipped.

FFT-convolution methods can be employed to quickly calculate the terms in Eq. 3 without the need to extract pairs explicitly. In the simplest case, one can convolve an image with its horizontally and vertically flipped version (using python numpy syntax): convolve(image, image[::-1, ::-1]), to obtain a non-normalized version of the correlation coefficient. However, to obtain the Pearson correlation coefficient, normalization with respect to the number of pixels involved at each distance/angle, as well as the sample means and sample variances, must also be computed, as in the terms of Eq. 3. The number of pixels contributing to the correlation coefficient for an offset of x and y pixels horizontally and vertically is given by N(x, y) = convolve(I, I), where I is a matrix of ones of the same shape as the image. The sample means are defined as µ(x, y) = convolve(image, I)/N(x, y), where we also compute the flipped version and denote it µ(−x, −y).
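The layer-size rule in Eq. 1 can be sketched in a few lines. The paper does not specify how C_N and the rounding to integer layer sizes are computed; this sketch assumes proportional rescaling followed by rounding, with a floor of 2 neurons per layer, and the function name is ours:

```python
import numpy as np

def layer_sizes(L, N, mu, omega, alpha, min_neurons=2):
    """Sketch of Eq. 1: n(l) proportional to N * exp(mu*l/L) * (alpha + sin(omega*l)).
    C_N rescales the raw profile so the total neuron count is as close as
    possible to N (assumed: proportional rescaling, then rounding)."""
    l = np.arange(1, L + 1)
    raw = N * np.exp(mu * l / L) * (alpha + np.sin(omega * l))
    c_n = N / raw.sum()  # normalizing factor C_N
    # Round to integers and enforce the minimum of 2 neurons per layer.
    return np.maximum(np.rint(c_n * raw).astype(int), min_neurons)
```

Since α ∈ {2, 5} and sin(ωl) ∈ [−1, 1], the factor (α + sin(ωl)) stays positive, so the raw profile never produces negative layer sizes.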
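Putting the network details of Sec. II-A together, the forward pass of a random-weight CPPN can be sketched as below. Function and parameter names are illustrative, not from the paper; all pixels are processed in one batched matrix product rather than truly one at a time, which is equivalent because each pixel is independent:

```python
import numpy as np

def random_cppn_image(layer_sizes, size=512, seed=0):
    """Sketch of CPPN image generation: each pixel (x, y, r) is mapped through
    a random fully connected network with tanh(z**3) hidden units and a
    sigmoid RGB output, weights and biases drawn from N(0, 1)."""
    rng = np.random.default_rng(seed)
    # Pixel coordinates in [-1, 1] with (0, 0) at the center, plus the radius r.
    coords = np.linspace(-1, 1, size)
    x, y = np.meshgrid(coords, coords)
    r = np.sqrt(x**2 + y**2)
    h = np.stack([x.ravel(), y.ravel(), r.ravel()], axis=1)  # (size*size, 3)
    for n_out in layer_sizes:
        w = rng.normal(0, 1, (h.shape[1], n_out))  # weights ~ N(0, 1)
        b = rng.normal(0, 1, n_out)                # bias weights
        h = np.tanh((h @ w + b) ** 3)              # tanh of cubed inputs
    w = rng.normal(0, 1, (h.shape[1], 3))
    b = rng.normal(0, 1, 3)
    return (1.0 / (1.0 + np.exp(-(h @ w + b)))).reshape(size, size, 3)  # sigmoid RGB
```

Re-using the same seed across different `layer_sizes` configurations reproduces the paper's setup of comparing architectures under identical random draws.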
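The convolution scheme of Sec. II-B1 can be sketched with numpy's FFT routines. `conv_full` stands in for the generic `convolve` used in the text, and `correlation_by_offset` is an illustrative name; the result gives the Pearson coefficient of Eq. 3 for every pixel offset, which can then be binned by distance to obtain the correlation function:

```python
import numpy as np

def conv_full(a, b):
    # Full linear 2D convolution via FFT (zero-padded to the full output size).
    h, w = a.shape[0] + b.shape[0] - 1, a.shape[1] + b.shape[1] - 1
    return np.fft.irfft2(np.fft.rfft2(a, (h, w)) * np.fft.rfft2(b, (h, w)), (h, w))

def correlation_by_offset(image):
    """Pearson coefficients (Eq. 3) for every pixel offset (dx, dy), computed
    with convolutions instead of extracting pixel pairs explicitly.
    The zero-offset coefficient sits at the center of the returned array."""
    img = np.asarray(image, float)
    ones = np.ones_like(img)
    flip = lambda m: m[::-1, ::-1]
    n = conv_full(ones, ones)              # pair counts N(x, y)
    e_xy = conv_full(img, flip(img)) / n   # E[XY] per offset
    e_x = conv_full(img, flip(ones)) / n   # E[X] per offset, i.e. mu(x, y)
    e_y = conv_full(ones, flip(img)) / n   # E[Y] per offset, i.e. mu(-x, -y)
    var_x = conv_full(img**2, flip(ones)) / n - e_x**2
    var_y = conv_full(ones, flip(img**2)) / n - e_y**2
    with np.errstate(invalid="ignore", divide="ignore"):
        return (e_xy - e_x * e_y) / np.sqrt(var_x * var_y)
```

At extreme offsets only a handful of pixels overlap, so the variance terms vanish and the coefficient is undefined (NaN); in practice those offsets contribute few pairs and can be discarded.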
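Welch's test, used in Sec. II to compare correlation functions between image sets, reduces to a t-statistic with a Welch–Satterthwaite degrees-of-freedom correction. A minimal numpy-only sketch (the function name is ours; in practice one may use `scipy.stats.ttest_ind(a, b, equal_var=False)`):

```python
import numpy as np

def welch_t(a, b):
    """Welch's t-statistic and degrees of freedom for two samples of
    possibly different size and variance."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    va, vb = a.var(ddof=1) / na, b.var(ddof=1) / nb  # squared standard errors
    t = (a.mean() - b.mean()) / np.sqrt(va + vb)
    # Welch-Satterthwaite approximation for the degrees of freedom.
    df = (va + vb) ** 2 / (va**2 / (na - 1) + vb**2 / (nb - 1))
    return t, df
```

For equal sample sizes and variances, df reduces to the familiar n_a + n_b − 2 of the ordinary t-test.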
