AI Street View

For the weekly Street View task, we were asked to look through any images and create a theme (space / place / colour / urban / rural), choosing 6-8 images to create a narrative (or non-narrative) theme within any genre: fashion, documentary, surreal, abstract, voyeur, domestic, pop.

We were introduced to a few artists who create in the context of a street view: Jacqui Kennedy, Michael Wolf, Mishka Henner, Doug Rickard, Thomas Ruff and Richard Prince.

I watched a short video on the work of Doug Rickard.

When I thought about digging through any images, the first thing that came to my mind was something I had heard about a while back and that had stayed at the back of my mind, simultaneously fascinating and scaring me. I am talking, of course, about Google's AI "dreams". After a bit of research, I found a good article online in The Guardian by Alex Hern. I found the concept extremely interesting and decided to complete this week's task with an AI Street View. I used a Deep Dream generator to convert images of New Mills into AI versions in different styles. I think the results are incredible, and the mind boggles in disbelief and amazement at our technological advances. The fact that artificial intelligence can and does "dream" is as impressive as it is unnerving. I wonder about our future in the world of artificial intelligence; at the moment it's all quite exciting, but I hope that science won't fail our world by taking things too far. In the meantime, I had some innocent and carefree fun creating these images using the Deep Dream Generator from this website: https://deepdreamgenerator.com/generator

Below I attach the aforementioned article from The Guardian, which explains AI dreaming: https://www.businessinsider.com/what-ai-inceptionism-dreams-look-like-2015-6?r=US&IR=T

Deep Dream is a computer program that locates and alters patterns that it identifies in digital pictures. Then it serves up those radically tweaked images for human eyes to see. … You can upload any image you like to Google's program, and seconds later you'll see a fantastical rendering based on your photograph.

[Image: Google AI dreams]

What do machines dream of? New images released by Google give us one potential answer: hypnotic landscapes of buildings, fountains and bridges merging into one. The pictures, which veer from beautiful to terrifying, were created by the company's image recognition neural network, which has been "taught" to identify features such as buildings, animals and objects in photographs.

They were created by feeding a picture into the network, asking it to recognise a feature of it, and modify the picture to emphasise the feature it recognises. That modified picture is then fed back into the network, which is again tasked to recognise features and emphasise them, and so on. Eventually, the feedback loop modifies the picture beyond all recognition.

At a low level, the neural network might be tasked merely to detect the edges on an image. In that case, the picture becomes painterly, an effect that will be instantly familiar to anyone who has experience playing about with Photoshop filters:

[Image: Google AI dreams]

But if the neural network is tasked with finding a more complex feature – such as animals – in an image, it ends up generating a much more disturbing hallucination:

[Image: knight, Google AI dreams]

Ultimately, the software can even run on an image which is nothing more than random noise, generating features that are entirely of its own imagination. Here's what happens if you task a network focused on finding building features with finding and enhancing them in a featureless image:

The pictures are stunning, but they're more than just for show. Neural networks are a common feature of machine learning: rather than explicitly programme a computer so that it knows how to recognise an image, the company feeds it images and lets it piece together the key features itself.

But that can result in software that is rather opaque. It's difficult to know what features the software is examining, and which it has overlooked. For instance, asking the network to discover dumbbells in a picture of random noise reveals it thinks that a dumbbell has to have a muscular arm gripping it. The solution might be to feed it more images of dumbbells sitting on the ground, until it understands that the arm isn't an intrinsic part of the dumbbell.

"One of the challenges of neural networks is understanding what exactly goes on at each layer. We know that after training, each layer progressively extracts higher and higher-level features of the image, until the final layer essentially makes a decision on what the image shows. For example, the first layer may look for edges or corners. Intermediate layers interpret the basic features to look for overall shapes or components, such as a door or a leaf. The final few layers assemble those into complete interpretations – these neurons activate in response to very complex things such as entire buildings or trees," explain the Google engineers on the company's research blog.

"One way to visualise what goes on is to turn the network upside down and ask it to enhance an input image in such a way as to elicit a particular interpretation," they add. "Say you want to know what sort of image would result in 'banana'. Start with an image full of random noise, then gradually tweak the image towards what the neural net considers a banana."

The image recognition software has already made it into consumer products. Google's new photo service, Google Photos, features the option to search images with text: entering "dog", for instance, will pull out every image Google can find which has a dog in it (and occasionally images with other quadrupedal mammals, as well).

So there you have it: Androids don't just dream of electric sheep; they also dream of mesmerising, multicoloured landscapes.
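For anyone curious how the feedback loop described above looks in practice, here is a minimal sketch of the idea in Python, assuming TensorFlow/Keras and its pre-trained InceptionV3 network. It is only an illustration: the layer name ("mixed3"), the step size and the iteration counts are arbitrary choices of mine, not anything Google published. The same loop can start from a photograph or, as in the "banana" example, from pure random noise; a true class visualisation would additionally maximise the classifier's score for a chosen class rather than a mid-layer activation.

```python
# A minimal Deep Dream-style sketch, assuming TensorFlow 2.x / Keras and the
# pre-trained InceptionV3 weights. Layer name, step size and iteration counts
# are illustrative choices only.
import tensorflow as tf

base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet")
# "Dream" by maximising the activations of one intermediate layer.
dream_model = tf.keras.Model(inputs=base.input,
                             outputs=base.get_layer("mixed3").output)

def dream_step(img, step_size=0.01):
    """One pass of the feedback loop: recognise features in the image,
    then nudge the pixels so those features become stronger."""
    with tf.GradientTape() as tape:
        tape.watch(img)
        activations = dream_model(img)
        loss = tf.reduce_mean(activations)       # how strongly the layer responds
    grads = tape.gradient(loss, img)
    grads /= tf.math.reduce_std(grads) + 1e-8    # normalise so steps stay stable
    img = img + grads * step_size                # emphasise what the network "saw"
    return tf.clip_by_value(img, -1.0, 1.0)      # keep pixels in the model's range

def deep_dream(img, steps=100):
    img = tf.convert_to_tensor(img, dtype=tf.float32)
    for _ in range(steps):                       # feed the result back in, repeatedly
        img = dream_step(img)
    return img

# Start from a photograph (preprocessed to [-1, 1])...
# dreamed = deep_dream(tf.keras.applications.inception_v3.preprocess_input(photo)[None, ...])
# ...or, as in the article's "banana" example, from pure random noise:
noise = tf.random.uniform((1, 299, 299, 3), minval=-1.0, maxval=1.0)
dreamed_noise = deep_dream(noise, steps=200)
```

Running a few hundred steps on a photograph produces the kind of painterly, hallucinated imagery shown above; the Deep Dream Generator website wraps a process along these lines behind an upload button.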

A couple of other articles I found insightful on the subject:

https://qz.com/432678/the-dreams-of-googles-ai-are-equal-parts-amazing-and-disturbing/

https://www.iflscience.com/technology/artificial-intelligence-dreams/

A very good, short explanation can be found on YouTube: What is Google DeepDream? | Darkology #21 (2017).

A few more examples of AI dreams.

Reference List

Rickard, D. (2010) 'A New American Picture'. Pier 24 Photography. [Online video] Available at: https://pier24.org/video/doug-rickard-a-new-american-picture/ [Accessed 10/02/21].

Bluelavasix (2017) 'What is Google DeepDream? | Darkology #21'. YouTube. [Online video] Available at: https://youtu.be/f23vEzI3LQ0 [Accessed 10/02/21].

Published by Elzbieta Skorska

My name is Elzbieta Skorska. I am a second-year photography degree student at MMU.
