text-to-image synthesis
[neural image synthesis on sentences from my dream journal]

The challenging practice of "painting from memory" got me wondering how our brains actually store visual memories. Do we remember the words we use to describe a scene? Or do we recall a blur of colors and piece it together based on how we think familiar objects should look?

I'd been reading some recent research on text-to-image synthesis using Generative Adversarial Networks, trained on datasets of images where each image is paired with captions describing its contents. The model reads in the text of a caption, compresses the description with an encoder, and learns to generate full images from the encoded descriptions.
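To make the data flow concrete, here's a toy sketch of that pipeline. This is illustrative only, not the actual model: a real system would use a learned text encoder and a GAN generator trained adversarially, while here both maps are untrained stand-ins (a character-frequency "encoder" and a random linear layer), just to show how a caption plus a noise vector becomes pixels.

```python
import numpy as np

EMB_DIM, NOISE_DIM, IMG_SIDE = 16, 8, 4
rng = np.random.default_rng(0)

def encode_caption(caption: str) -> np.ndarray:
    """Compress a caption into a fixed-length embedding.
    Stand-in for a learned text encoder: averaged character features."""
    codes = np.array([ord(c) for c in caption.lower()])
    feats = np.zeros((len(codes), EMB_DIM))
    feats[np.arange(len(codes)), codes % EMB_DIM] = 1.0
    return feats.mean(axis=0)

# Stand-in "generator": a random linear map from [embedding, noise]
# to a flattened image, squashed into [0, 1] pixel values.
W = rng.normal(size=(EMB_DIM + NOISE_DIM, IMG_SIDE * IMG_SIDE))

def generate_image(embedding: np.ndarray, noise: np.ndarray) -> np.ndarray:
    z = np.concatenate([embedding, noise])   # condition on the caption
    pixels = 1.0 / (1.0 + np.exp(-z @ W))    # sigmoid -> [0, 1] pixels
    return pixels.reshape(IMG_SIDE, IMG_SIDE)

emb = encode_caption("a red door floating over gray water")
img = generate_image(emb, rng.normal(size=NOISE_DIM))
print(img.shape)  # (4, 4)
```

The key structural point is the concatenation: the generator sees the caption embedding alongside the noise, so the same sentence can yield many different images depending on the noise draw.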

After getting the model up and running, I generated several thousand images by combining descriptive sentences and phrases from a dream journal I keep, injecting random noise to create a wide set of variations from a small number of input phrases. Looking through all the files took too long, so I explored them at random and made 8" x 8" alla-prima paintings of whichever ones caught my eye.
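The sampling loop described above can be sketched roughly as follows. Everything here is hypothetical (the phrases, the pairwise combination scheme, the variant count); the point is just that re-drawing the noise vector for a fixed prompt is what turns a handful of journal phrases into thousands of candidate images.

```python
import itertools
import numpy as np

# Illustrative dream-journal phrases (not from the actual journal).
phrases = [
    "a staircase made of water",
    "two moons over the garage",
    "my childhood kitchen, but endless",
]
VARIANTS_PER_PROMPT, NOISE_DIM = 4, 8
rng = np.random.default_rng(1)

outputs = []
for a, b in itertools.combinations(phrases, 2):
    prompt = f"{a}; {b}"                     # combine two phrases per prompt
    for k in range(VARIANTS_PER_PROMPT):
        noise = rng.normal(size=NOISE_DIM)   # fresh noise -> new variant
        # a real run would call the generator here and save the image;
        # we just record what would be generated
        outputs.append((prompt, k, noise))

# 3 phrases -> 3 pairwise prompts x 4 noise draws = 12 samples
print(len(outputs))  # 12
```

Scaled up to more phrases and more noise draws per prompt, this loop is how a small journal fans out into several thousand files.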

Tired from the 2-3 weeks spent researching and debugging the neural network code to generate these images, I was too exasperated to bother annotating several thousand files with their original input captions. I haven't yet looked back at which phrases were used to generate each image, but somehow it feels appropriate to leave part of the surprise for later.