Picture yourself in a boat on a river, with tangerine trees and marmalade skies. Throw in a dog that looks like a slug and you get Google’s artificial neural network “Deep Dream”.

Lucy in the Sky with lots of eyes
Two weeks ago, Google’s research team featured on their blog a visualization tool designed to help understand how neural networks work and how to replicate them artificially.
Artificial neural networks are learning models essential to machine learning. Loosely modeled on biological central nervous systems, they are used in image classification and speech recognition, and are “trained” by feeding them large amounts of input data.

In the past, classifying images into categories was nearly impossible for machines, but advances in machine learning have made it possible for them to distinguish images with a high degree of accuracy.
Say you want to train a machine to recognize what a dog looks like. By feeding it dog images (the larger the quantity, the higher the accuracy), it can be trained to spot dogs in images, tell you when there aren’t any, or say when it is unsure.
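If you’re curious what that looks like in code, here is a minimal sketch (not Google’s code) of a “dog / not dog” classifier in Python, using Keras; the image size, the tiny network, and the random stand-in data are all assumptions made purely for illustration:

```python
import numpy as np
from tensorflow.keras import layers, models

# A tiny convolutional network: a few layers of pattern detectors
# followed by a single yes/no ("dog" / "not dog") output.
model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),       # 128x128 RGB photos (size is an assumption)
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # probability that the image shows a dog
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Random stand-in data; a real run would use labeled photos, and the more
# (and more varied) dog photos you feed it, the better the accuracy.
x = np.random.rand(64, 128, 128, 3).astype("float32")
y = np.random.randint(0, 2, size=(64, 1))
model.fit(x, y, epochs=1)
```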

such algorithm
But what do these machines see? A psychedelic trip, apparently.
The research team explained in their post: “We know that after training, each layer progressively extracts higher and higher-level features of the image, until the final layer essentially makes a decision on what the image shows.”
“For example, the first layer maybe looks for edges or corners. Intermediate layers interpret the basic features to look for overall shapes or components, like a door or a leaf. The final few layers assemble those into complete interpretations—these neurons activate in response to very complex things such as entire buildings or trees.”

Accompanying image from Google’s post
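To make that layer-by-layer idea concrete, here is a small Python sketch (an illustration, not Google’s code) that loads an Inception network, the same family of network the post is built on, and prints the activations of an early, a middle, and a late layer; the layer names are simply the ones Keras’s InceptionV3 uses:

```python
import numpy as np
import tensorflow as tf

# An Inception network pretrained on ImageNet photos.
net = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False)

# Probe an early, a middle, and a late layer.
names = ["mixed0", "mixed5", "mixed10"]
probe = tf.keras.Model(inputs=net.input,
                       outputs=[net.get_layer(n).output for n in names])

img = np.random.rand(1, 299, 299, 3).astype("float32")  # stand-in for a real photo
for name, act in zip(names, probe(img)):
    # Spatial detail shrinks and abstraction grows as you go deeper.
    print(name, act.shape)
```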
The Google Research team found that feeding the algorithm its own output, over and over, yielded interesting results.
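The trick is a feedback loop: pick a layer, measure how strongly it responds to the current image, then nudge the image’s pixels so the response gets stronger, and repeat. Here is a minimal sketch of that loop in Python with TensorFlow (an illustration, not Google’s released code; the layer choice, step size, and iteration count are arbitrary):

```python
import numpy as np
import tensorflow as tf

net = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False)
dream = tf.keras.Model(net.input, net.get_layer("mixed3").output)

# Start from noise and repeatedly exaggerate whatever the layer "sees" in it.
img = tf.Variable(np.random.rand(1, 299, 299, 3).astype("float32"))

for step in range(50):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(dream(img))        # how strongly the layer responds
    grad = tape.gradient(loss, img)
    grad /= tf.math.reduce_std(grad) + 1e-8      # normalize the step size
    img.assign_add(0.01 * grad)                  # change the image, not the network
    img.assign(tf.clip_by_value(img, 0.0, 1.0))  # keep valid pixel values
```

Run it long enough and faint edge-and-texture accidents in the image get amplified into the over-interpreted shapes you see below.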
Check out some of the psychedelic images generated by Google’s Deep Dream below:

From Lincoln Harrison’s Startrail Gallery

Nickelodeon’s Patrick Star

Jackson Pollock’s “No. 5”

A Sunday Afternoon on the Island of La Grande Jatte

That frigging dress

An eye with eyes

Guardians of the Galaxy

much wow

Van Gogh’s Starry Night

Edvard Munch’s The Scream

Images from Research at Google
Why are there so many animals?
According to Google’s research team, this particular algorithm was trained on a large number of animal images, so it naturally tends to interpret shapes as animals. Because the network stores what it has learned at a high level of abstraction, the results are hybrid animals.

Creepy Pasta. Literally.
So the effect isn’t really limited to dogs; it depends on the data set fed to the code. Here’s a video of someone using MIT’s Places CNN instead.
So how do you make your own?
Google’s Research team made their visualization code public after it drew a great amount of interest from programmers and artists alike. Check out their GitHub post here.

Oh yeah
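If you are programming-savvy, driving the released code comes down to a few lines. This is a hedged sketch, assuming you have already run the notebook’s earlier cells (which define the deepdream() helper) and downloaded the GoogLeNet model files; both file paths below are placeholders:

```python
import numpy as np
import PIL.Image
import caffe

# Load GoogLeNet the way the notebook does; the two file paths are placeholders.
net = caffe.Classifier('deploy.prototxt', 'bvlc_googlenet.caffemodel',
                       mean=np.float32([104.0, 116.0, 122.0]),  # ImageNet channel means
                       channel_swap=(2, 1, 0))                  # RGB -> BGR for Caffe

img = np.float32(PIL.Image.open('sky.jpg'))   # any photo you want to dream on
out = deepdream(net, img)                     # deepdream() comes from the notebook's cells
PIL.Image.fromarray(np.uint8(out)).save('dream.jpg')
```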
The code can be applied to both static images and videos. Check out this video that uses the code iteratively: each frame is recursively fed back into the network, starting from a frame of random noise. Every 100 frames (4 seconds), the next layer is targeted, until the lowest layer is reached.
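Here is a sketch of that frame-feedback loop (an illustration, not the code behind the video; the layer names come from Keras’s InceptionV3, and 25 fps is inferred from “100 frames = 4 seconds”):

```python
import numpy as np
import tensorflow as tf

net = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False)

def dream_frame(frame, layer_name, steps=5, lr=0.01):
    """One short burst of gradient ascent toward the given layer."""
    model = tf.keras.Model(net.input, net.get_layer(layer_name).output)
    img = tf.Variable(frame)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            loss = tf.reduce_mean(model(img))
        grad = tape.gradient(loss, img)
        img.assign_add(lr * grad / (tf.math.reduce_std(grad) + 1e-8))
    return tf.clip_by_value(img, 0.0, 1.0).numpy()

layer_names = ["mixed10", "mixed8", "mixed5", "mixed2", "mixed0"]  # highest to lowest
frame = np.random.rand(1, 299, 299, 3).astype("float32")           # frame of random noise

for i in range(500):                                         # 500 frames = 20 s at 25 fps
    layer = layer_names[min(i // 100, len(layer_names) - 1)] # step down a layer every 100 frames
    frame = dream_frame(frame, layer)                        # feed each output back in
    # write `frame` out here as frame i of the video
```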
No worries if you are not programming-savvy. You can submit your images at http://psychic-vr-lab.com/deepdream/ or http://deepdream.pictures/static/ which use the same code to generate these trippy images.
Sources:
Inceptionism: Going Deeper into Neural Networks
DeepDream – a code example for visualizing Neural Networks
Do you have Deep Dream images you generated? Post them in the comments below!