Designers are modern-day inventors. We find problems and opportunities, then use our creativity combined with technology to solve or seize them. And those problems and opportunities include the ones we stumble upon in our own creative process. In the early days the core tools might have been simply pencil and paper, which enabled us to capture our thoughts and refine our ideas in more detail. Much later, we got full-blown software suites that help us pour our ideas straight from our brains onto the screen and into the world. Now, with AI on the rise, have we reached the moment where we let our tools take over even the creative process? Or is it simply another medium for (visual) creation? And to what extent does this new technology influence our thoughts and ideas? In other words:
What is the role that AI might play in my creative process as a designer?
In my opinion there is only one way to find out: to explore! So that’s what I set out to do, and share with you here.
Disclaimer: my sole focus for now is on AI that generates visuals and aesthetic elements. So if you are looking for AI-generated poems, musical compositions or code, this is not the place, but maybe we will explore that in a future article 😉
One common way to use AI in the world of visual design is through a style transfer model. With style transfer you apply a style, let’s say van Gogh’s Starry Night, onto an image, e.g. your own portrait picture. The result is a portrait of yourself, as if van Gogh had painted it.
It works like this: you start by training an AI to understand what makes a van Gogh painting recognisable as such, i.e. large brush strokes, flowing patterns, bright colours, and so on. Then you ask the model to apply those same principles to ‘paint’ your portrait. And like a true forger, the AI will transform your facial features into a lively van Gogh-like painting. Only instead of taking a few years of practice and days to paint, the AI takes hours to train and can spit out new paintings within seconds.
As a designer there are two moments to apply your creativity. First, you choose which style to train the AI on: famous painters like Kandinsky or van Gogh are often used, as their styles are so very recognisable. But why not feed it a photograph by Karl Blossfeldt? Or better yet, if you’re an artist yourself: why not use one of your own artworks?
Second, you choose which image to apply the style to. You may choose something personal, like your own portrait; an object that intrigues you; or simply an image that you feel will generate a visually pleasing result.
So what is the value of this? One perhaps obvious application is social media, where it can make for some interesting filters. But that’s not particularly stimulating for me as a designer.
I think for us designers, it can be a source of inspiration as well as a method of creation.
At first, it may feel like a cheap or lazy method to use, as it lacks the traditional craftsmanship. But there is still plenty of room for creativity; it takes skill to find the right content and style images to create an output with the desired effect. And as it takes little time to create, it is inviting to experiment. This could make the design process more iterative and playful as compared to some of the more traditional art techniques.
For my experiments I used Google Colab.
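To give a flavour of what such an experiment looks like in a notebook, here is a minimal sketch using the pre-trained arbitrary image stylization model published on TensorFlow Hub. The file names are placeholders for your own content and style images:

```python
import tensorflow as tf
import tensorflow_hub as hub

def load_image(path, max_dim=512):
    """Load an image as a float32 tensor in [0, 1] with a batch dimension."""
    img = tf.io.decode_image(tf.io.read_file(path), channels=3, dtype=tf.float32)
    img = tf.image.resize(img, (max_dim, max_dim), preserve_aspect_ratio=True)
    return img[tf.newaxis, ...]

content = load_image('portrait.jpg')                 # placeholder: the image to repaint
style = load_image('starry_night.jpg', max_dim=256)  # placeholder: the style source

# Pre-trained fast style transfer model from TensorFlow Hub.
model = hub.load('https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2')
stylized = model(tf.constant(content), tf.constant(style))[0]

tf.keras.utils.save_img('stylized.png', stylized[0])
```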
In contrast to what I refer to as the stylist AI, which will paint any subject you introduce, the digital clone creates novel work ‘from scratch’: after training, all it takes as input is noise (a so-called latent vector).
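To make ‘noise in, image out’ concrete, here is a toy sketch in PyTorch. The actual models behind tools like Runway are far larger, but the mechanics are identical: a generator network maps a random latent vector to an image.

```python
import torch
import torch.nn as nn

# A toy DCGAN-style generator: much smaller than anything Runway trains,
# but the principle is the same: a noise vector goes in, an image comes out.
class Generator(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.ReLU(),  # 1x1 -> 4x4
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),         # 4x4 -> 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),          # 8x8 -> 16x16
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),            # 16x16 -> 32x32 RGB
        )

    def forward(self, z):
        # Reshape the flat latent vector into a 1x1 'image' with latent_dim channels.
        return self.net(z.view(z.size(0), -1, 1, 1))

generator = Generator()
z = torch.randn(1, 128)   # the 'noise': a random 128-dimensional latent vector
image = generator(z)      # after training, this would be a novel image
print(image.shape)        # torch.Size([1, 3, 32, 32])
```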
To experiment with this principle, we at DEUS decided to try and resurrect da Vinci by training an AI in his style. In doing so we created the digital twin we call d’AI Vinci.
Here I think it’s interesting to dive a little into the process:
For this experiment we chose to make use of a no-code tool called Runway ML. This tool allows us to focus on the creative input, rather than spending most of our time on GitHub tutorials.
But the creative process starts not with Runway, but with data collection. In our case: collecting a shitload of images of da Vinci’s sketches. Although much of his work is freely available online, there is no easy way to download all his images at once. Hence we scraped a couple of websites, downloaded some images manually, and even screenshotted some files. Although this is not the most efficient way of collecting data, it wasn’t so bad either, and it allowed us a bit of quality control over which images to feed the AI.
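For illustration, a scraping run might look something like the sketch below. The URL is a placeholder, not one of the actual sites we used, and real pages each need their own selectors:

```python
import os
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

BASE_URL = 'https://example-archive.org/da-vinci/sketches'  # placeholder URL
OUT_DIR = 'davinci_sketches'
os.makedirs(OUT_DIR, exist_ok=True)

soup = BeautifulSoup(requests.get(BASE_URL, timeout=30).text, 'html.parser')

for i, tag in enumerate(soup.find_all('img')):
    src = tag.get('src')
    if not src or not src.lower().endswith(('.jpg', '.jpeg', '.png')):
        continue  # skip icons, logos and other non-sketch images
    image_bytes = requests.get(urljoin(BASE_URL, src), timeout=30).content
    with open(os.path.join(OUT_DIR, f'sketch_{i:04d}.jpg'), 'wb') as f:
        f.write(image_bytes)
```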
Then it was time for training the model. In Runway, you have the option to work with a pretrained model, which is great because that reduces the training time immensely (hours instead of days).
The first round of training, with a model pretrained on bird illustrations, was already promising. The notable colour palette and hatching patterns were clearly visible, and even his signature mirrored scribbles were appearing. But the quality of the images was quite poor.
A little online research suggested that a model pre-trained on faces might yield better results. And indeed, the quality seemed to improve a little, but it was still not quite what we were hoping for.
That's why we decided to further refine the training data: making sure to collect only high-quality images, and extending the set with some newly found anatomy sketches. And that paid off! As you can see, the resulting images are more defined, and you can clearly see bone and muscle structures appearing that likely originate from the anatomy studies we added.
What I find intriguing is the fantastical aspect of the sketches. Although the shapes are well defined, most of them are not recognisable as anything from our world. I don’t actually think the AI is consciously trying to draw anything in particular, but it is an interesting thought experiment to assume that it is, and to ask yourself: what subject is the AI trying to draw? If these are pages in the AI’s notebook, what does that say about the world it lives in?
From a practical standpoint, I think this type of AI can be very inspirational. For an artist, it can be interesting to train an AI on their own work, to explore what makes their style unique. The results can also be pieces of art in themselves, or serve as inspiration for new pieces.
This does trigger the need for a conversation around copyright. Who owns the artwork generated by the AI? The person who trained the model, the engineers who built it, the artist, the AI itself? And if I wanted to create a digital clone of a (living) artist I admire, would I need their permission to do so?
Another way that AI can support the creative process is by utilising its power to generate many variations in an instant. You might have heard of Project Dreamcatcher, a generative design system that enables designers to explore trade-offs between many alternative approaches and select design solutions for manufacturing. While that is a very practical application, I wanted to explore how AI, with its many variations, can inspire us in a purely visual manner. Like in this chair experiment by Philipp Schmitt and Steffen Weiss.
There’s no fun in designing more chairs. But why not train an AI Sneaker designer? After collecting a whole lot of sneaker designs from the Nike webshop, and let runway train for approx 3 hours, I was ready to see some AI generated shoes. it was good! But…. While I quickly found out that the model seems very successful in generating realistic sneaker, I would not call it inspirational. Not by a long shot. Rather, it’s just more of the same — which, when you think about it, is to be expected.
Imagine you take a human, fresh out of the womb, and show it only pictures of glasses, then ask it to draw glasses. What do you think it will come up with? I strongly believe our creativity comes from our ability to combine knowledge and experiences from different parts of our lives to make new, unexpected, meaningful connections. So why would an AI be any different?
Determined to train my AI to be more creative, I decided to untrain it. Similar to the methods we use in workshops with our clients, where we invoke creative thinking by forcing ‘random links’ with seemingly unrelated cases, technologies or topics, I asked my AI to mix car and shoe designs, thinking it might produce some inspiring new perspectives on what shoes (or cars) might look like.
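One way to force that kind of mixing, conceptually, is transfer learning: resume training a generator that already knows one domain on images from another, and stop before it forgets the first. A hedged sketch, reusing the toy Generator from earlier; the checkpoint file and image folder are placeholders:

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

generator = Generator()  # the toy class sketched earlier
generator.load_state_dict(torch.load('cars_generator.pt'))  # placeholder: a model that knows cars

# Placeholder dataset: a folder of sneaker photos (ImageFolder expects one
# subfolder per class, e.g. sneaker_images/all/).
shoes = datasets.ImageFolder(
    'sneaker_images',
    transform=transforms.Compose([
        transforms.Resize((32, 32)),
        transforms.ToTensor(),
    ]),
)
loader = DataLoader(shoes, batch_size=64, shuffle=True)

# A small learning rate nudges the model towards shoes without fully erasing
# what it learned about cars; early checkpoints blend features of both.
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-4)
# ...from here, run the usual adversarial training loop over `loader`.
```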
You’ve probably heard of AI being able to create photorealistic images of people that don’t actually exist. And, as you’ve seen with the shoe experiment, AI is able to create photorealistic images of virtually anything — as long as you have enough images to train it with. Need a cat? there you go. need a fictional city plan? no problem. And in contrast with the artistic AI, I doubt the creators care much about the copyright of the produced images.
But how can I use this to create visual designs? While I find the realism impressive, I am more attracted to the abstract shapes that leave room for interpretation, and the ‘happy accidents’ that showcase the AI’s naive understanding of how the world works. That’s why I like to look at the intermediate learning stages, where you can see the model starting to understand the rules, but not yet putting all the pieces of the puzzle together.
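A simple way to capture those intermediate stages is to render the same batch of noise vectors at regular intervals during training, so you can flip through the frames and watch the model learn. A sketch, again reusing the toy generator; the training step itself is a placeholder:

```python
import torch
from torchvision.utils import save_image

generator = Generator()         # toy generator from earlier
z_fixed = torch.randn(16, 128)  # fixed noise: the same 'subjects' every epoch

for epoch in range(100):
    # train_one_epoch(generator, discriminator, loader)  # placeholder training step
    with torch.no_grad():
        samples = generator(z_fixed)
    # Tanh output lies in [-1, 1]; normalize=True maps it to a viewable range.
    save_image(samples, f'epoch_{epoch:03d}.png', nrow=4, normalize=True)
```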
As a fan of nature, I thought I’d introduce my AI to some beautiful stock images of leafy backdrops. And the result was quite mesmerising!
After playing around with video editing, I’m quite happy with the ‘fantastic flora’ the AI and I created.
Like any new technique, experimenting with AI has opened my mind to new possibilities, and helps me see new solutions to existing problems. The more I play around, the more I understand the possibilities and limitations, and can better anticipate what will give me the results I’m looking for.
I’m sure to continue exploring, and if you haven’t already, I suggest you do too! ✨