Making art through art: an ML approach to generative art

Juna Salviati
Jul 6, 2022
Photo by Raimond Klavins on Unsplash

Performing generative art with TensorFlow and a GAN. Full source code: https://github.com/antigones/py-truchet-gan

“Generative art” refers to the process of creating art by means of automation, usually with algorithms.

In this post, we will explore generative machine learning models to create new artistic images (Truchet tiling images), starting from an algorithmically generated set of samples.

Truchet tiles

Jean Truchet first introduced Truchet tiles in 1704. Each tile is split along a diagonal into two triangles, one black and one white.

Image from Wikipedia

Truchet tiles can be used to tessellate a plane in various ways and are sometimes used as a subject in generative art.

Example of Truchet tile plane tessellation

Generating Truchet tiles

To generate a Truchet tiled image, we can start by generating a single tile.

The function “create_base_tile” creates a base Truchet tile with a defined foreground colour, background colour and size, producing the following image:

simple base Truchet tile
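
A minimal sketch of what such a function could look like, assuming Pillow (PIL) for drawing (names and defaults here are illustrative, not necessarily those of the repo):

```python
from PIL import Image, ImageDraw

def create_base_tile(fg_colour="black", bg_colour="white", size=28):
    """Create a square tile split along a diagonal into two triangles."""
    tile = Image.new("L", (size, size), bg_colour)
    draw = ImageDraw.Draw(tile)
    # Fill the triangle under the main diagonal with the foreground colour.
    draw.polygon([(0, 0), (size - 1, size - 1), (0, size - 1)], fill=fg_colour)
    return tile
```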

Painting a Truchet tiling means creating a set of base tiles of a defined size and tiling the plane with them at random.

We introduce randomness in rotation by rolling a three-faced die and rotating the tile accordingly while tessellating the plane, as sketched below.
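
A possible sketch of the tiling step, again with Pillow and with the die roll from the description above (the grid size is an arbitrary choice):

```python
import random

from PIL import Image

def create_truchet_image(tile, tiles_per_side=4):
    """Tessellate the plane with randomly rotated copies of the base tile."""
    size = tile.width
    canvas = Image.new("L", (size * tiles_per_side, size * tiles_per_side), "white")
    for row in range(tiles_per_side):
        for col in range(tiles_per_side):
            k = random.randint(0, 2)  # the three-faced die
            canvas.paste(tile.rotate(90 * k), (col * size, row * size))
    return canvas
```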

We will use this simple algorithm to feed a GAN with training images.

Developing a GAN to generate new Truchet tilings

A GAN (Generative Adversarial Network) is an architecture composed of two neural networks training “in tandem”.

The first network is called the “generator”: its role is to produce images that look real. The second is called the “discriminator”: a CNN used to distinguish real samples (from the training set) from fake (“generated”) ones.

Generator

The generator starts from a random noise vector, which is reshaped into a small image and upscaled to the size of the input samples. Upscaling here is performed via Conv2DTranspose layers, where the stride is used to double the size of the image twice to reach the target size (in the following snippet of code we upscale from 7×7 to 28×28):
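
A generator sketch in the spirit of the TensorFlow DCGAN tutorial, matching the 7×7 → 28×28 upscaling described above (the exact layer widths are assumptions):

```python
import tensorflow as tf
from tensorflow.keras import layers

def make_generator_model():
    return tf.keras.Sequential([
        # Project the 100-dim noise vector and reshape it to a 7x7 feature map.
        layers.Dense(7 * 7 * 256, use_bias=False, input_shape=(100,)),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        layers.Reshape((7, 7, 256)),
        # 7x7 -> 7x7 (stride 1)...
        layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding="same", use_bias=False),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        # ...then 7x7 -> 14x14 -> 28x28 (stride 2 doubles the size twice).
        layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding="same", use_bias=False),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding="same", use_bias=False, activation="tanh"),
    ])
```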

Discriminator

The discriminator is a CNN classifier:
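
A matching discriminator sketch (reusing the imports above): a plain CNN binary classifier on 28×28 grayscale images.

```python
def make_discriminator_model():
    return tf.keras.Sequential([
        layers.Conv2D(64, (5, 5), strides=(2, 2), padding="same", input_shape=(28, 28, 1)),
        layers.LeakyReLU(),
        layers.Dropout(0.3),
        layers.Conv2D(128, (5, 5), strides=(2, 2), padding="same"),
        layers.LeakyReLU(),
        layers.Dropout(0.3),
        layers.Flatten(),
        layers.Dense(1),  # a single logit: high = "real", low = "fake"
    ])
```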

Losses

We compute two losses: one for the generator and one for the discriminator.

The generator loss quantifies the ability of the generator to produce realistic images: we compare two probability distributions, a vector of all ones (as if every generated sample were real) against the discriminator’s predictions on the generated set.

The more similar the two distributions, the lower the cross-entropy.

The discriminator loss takes into account both real_loss and fake_loss. real_loss is the cross-entropy between a vector of all ones and the discriminator’s predictions on real samples (1 = real), while fake_loss is the cross-entropy between a vector of all zeros and the discriminator’s predictions on the generated samples (0 = fake).
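
In code, these are the standard DCGAN losses (a sketch; from_logits=True because the discriminator outputs raw logits):

```python
cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def generator_loss(fake_output):
    # The generator succeeds when fakes are classified as real (all ones).
    return cross_entropy(tf.ones_like(fake_output), fake_output)

def discriminator_loss(real_output, fake_output):
    real_loss = cross_entropy(tf.ones_like(real_output), real_output)   # 1 = real
    fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)  # 0 = fake
    return real_loss + fake_loss
```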

Training the GAN

In a single training step, we use the generator to generate new images and use the discriminator to check images from the training set and the images obtained with the generator. Losses are calculated using those predictions (with the discriminator’s total loss defined as the sum of real_loss and fake_loss).

Then the gradients of the generator and of the discriminator are calculated with the “gradient” function, using the information recorded in the context kept by GradientTape(); at the very end of the training step, the optimizers apply the gradients to the variables using “apply_gradients”. In fact, the role of the optimizer is to update the model parameters according to the information gained through the loss function.
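
A training-step sketch along these lines, assuming the models, losses and optimizers from this post are in scope (BATCH_SIZE and the noise dimension are the usual tutorial values):

```python
NOISE_DIM = 100
BATCH_SIZE = 128

@tf.function
def train_step(images):
    noise = tf.random.normal([BATCH_SIZE, NOISE_DIM])
    # GradientTape records the forward passes so gradients can be derived.
    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
        generated_images = generator(noise, training=True)
        real_output = discriminator(images, training=True)
        fake_output = discriminator(generated_images, training=True)
        gen_loss = generator_loss(fake_output)
        disc_loss = discriminator_loss(real_output, fake_output)
    grad_gen = gen_tape.gradient(gen_loss, generator.trainable_variables)
    grad_disc = disc_tape.gradient(disc_loss, discriminator.trainable_variables)
    # The optimizers update the model parameters from the gradients.
    generator_optimizer.apply_gradients(zip(grad_gen, generator.trainable_variables))
    discriminator_optimizer.apply_gradients(zip(grad_disc, discriminator.trainable_variables))
    return gen_loss, disc_loss
```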

Here we use Adam optimizers with a learning rate of 1e-4:
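
In code, one optimizer per network:

```python
generator_optimizer = tf.keras.optimizers.Adam(1e-4)
discriminator_optimizer = tf.keras.optimizers.Adam(1e-4)
```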

The train function applies the training step to each and every batch of the training set, actually performing the training.

In the same function, losses are written to the log file to be displayed in TensorBoard, the model is saved at predefined epoch intervals and frames for the training-process GIF are saved.
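
A sketch of such a train function; the fixed seed, the log directory and the saving cadence are assumptions for illustration:

```python
def train(dataset, epochs, save_every=15):
    summary_writer = tf.summary.create_file_writer("logs")
    seed = tf.random.normal([16, NOISE_DIM])  # fixed noise to track progress
    for epoch in range(epochs):
        for image_batch in dataset:
            gen_loss, disc_loss = train_step(image_batch)
        # Write the losses so they can be displayed in TensorBoard.
        with summary_writer.as_default():
            tf.summary.scalar("generator_loss", gen_loss, step=epoch)
            tf.summary.scalar("discriminator_loss", disc_loss, step=epoch)
        # Generate a frame for the training-process GIF from the fixed seed...
        frame = generator(seed, training=False)
        # ...and arrange/save it here, e.g. as a PNG grid (omitted for brevity).
        # Save the model at predefined epoch intervals.
        if (epoch + 1) % save_every == 0:
            generator.save(f"generator_epoch_{epoch + 1}.keras")
```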

At the end of the training process, the generator is used to produce new images. In the following snippet, we generate 50 images:
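
A sketch of that generation step, reusing the names defined above; the refinement appears as a commented-out line, and the 127 threshold is an assumption:

```python
import numpy as np

num_images = 50
noise = tf.random.normal([num_images, NOISE_DIM])
predictions = generator(noise, training=False)

for i in range(num_images):
    # Map the tanh output from [-1, 1] back to [0, 255] grayscale.
    pixels = (predictions[i, :, :, 0].numpy() * 127.5 + 127.5).astype(np.uint8)
    # Optional refinement: snap gray pixels to pure black or white.
    # pixels = np.where(pixels > 127, 255, 0).astype(np.uint8)
    Image.fromarray(pixels, mode="L").save(f"generated_{i}.png")
```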

An additional step for image refinement is shown as a comment (e.g. to better discriminate black and white pixels, reassigning gray ones).

The following GIF shows the GAN training process:

Truchet tiling GAN training process

Results

Results after 300 epochs (EPOCHS=300), with BATCH_SIZE=128, seem to be quite good:

Generated Truchet tiling images

The generated images indeed show heterogeneous results!
