From March to June of 2021 I collaborated with the New York City Media Lab and CHANEL at the intersection of artificial intelligence and storytelling. The following are some of the results that Irene and I generated during that time.


 

CLIP + VQGAN + TOUCHDESIGNER

We experimented with CLIP and VQGAN to explore the power of generating images from text prompts, and with TouchDesigner to create more compelling results.
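To make the idea concrete, here is a minimal, self-contained sketch of the loop behind CLIP-guided VQGAN generation: a latent vector is repeatedly nudged so that the decoded image's embedding moves closer to the embedding of a text prompt. Everything below (the decoder, the image encoder, the prompt embedding) is a toy NumPy stand-in, not the actual CLIP or VQGAN models.

```python
import numpy as np

# Toy stand-ins: a "VQGAN decoder" maps a latent to an image,
# and a "CLIP image encoder" maps an image into a shared space.
rng = np.random.default_rng(0)
W_decode = rng.normal(size=(16, 64))   # hypothetical decoder weights
W_embed = rng.normal(size=(64, 8))     # hypothetical encoder weights

def decode(z):
    # latent -> image (stand-in for VQGAN)
    return np.tanh(z @ W_decode)

def embed_image(img):
    # image -> unit vector in "CLIP space" (stand-in for CLIP)
    v = img @ W_embed
    return v / np.linalg.norm(v)

# Pretend this is CLIP's embedding of the text prompt.
prompt_embedding = rng.normal(size=8)
prompt_embedding /= np.linalg.norm(prompt_embedding)

def loss(z):
    # Negative cosine similarity between decoded image and prompt.
    return -embed_image(decode(z)) @ prompt_embedding

# Optimize the latent with finite-difference gradient descent.
z = rng.normal(size=16)
initial_loss = loss(z)
eps, lr = 1e-4, 0.1
for step in range(200):
    grad = np.zeros_like(z)
    for i in range(len(z)):
        dz = np.zeros_like(z)
        dz[i] = eps
        grad[i] = (loss(z + dz) - loss(z - dz)) / (2 * eps)
    z -= lr * grad
```

In the real pipeline the decoder is VQGAN, the encoder is CLIP's image tower, and the gradient comes from backpropagation rather than finite differences; the structure of the loop is the same.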

 

CHANEL is All You Need

Interpolation between images generated from the prompt “CHANEL is all you need”. With these experiments we tried to explore what AI models know about CHANEL.

 
 

Feeling Happy, Feeling Lonely

Interpolation between images generated using TouchDesigner, CLIP, and VQGAN with the prompts “Feeling Happy” and “Feeling Lonely”. With these experiments we explored ways to express emotions in images using AI.
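The interpolation step can be sketched independently of the models: obtain one latent per prompt, walk between them, and decode each intermediate point into a frame. A common choice is spherical interpolation (slerp), which keeps intermediate latents at a plausible norm; the latents below are random placeholders, not the actual “Feeling Happy” and “Feeling Lonely” latents.

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors, often
    preferred over a straight line so that intermediate latents
    decode to plausible images."""
    z0n = z0 / np.linalg.norm(z0)
    z1n = z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(z0n @ z1n, -1.0, 1.0))
    if np.isclose(omega, 0.0):
        # Nearly parallel latents: fall back to linear interpolation.
        return (1 - t) * z0 + t * z1
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

rng = np.random.default_rng(1)
z_happy = rng.normal(size=256)    # placeholder latent for one prompt
z_lonely = rng.normal(size=256)   # placeholder latent for the other
frames = [slerp(z_happy, z_lonely, t) for t in np.linspace(0, 1, 30)]
```

Decoding each entry of `frames` with the generator yields the morphing sequence between the two prompts.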

 

Reactive Generations

We used TouchDesigner, Aphantasia, and CLIP to create an interactive environment where external inputs condition the final generations: trackpad positions, sounds, and other controls that tune the parameters of the generation.
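A minimal sketch of that kind of control mapping, with hypothetical parameter names and ranges: each incoming signal is clamped and linearly remapped into the range a generation parameter expects.

```python
def map_control(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly remap a control signal (trackpad axis, audio level, ...)
    to a generation-parameter range, clamping out-of-range input."""
    t = (value - in_lo) / (in_hi - in_lo)
    t = max(0.0, min(1.0, t))
    return out_lo + t * (out_hi - out_lo)

# Hypothetical mappings: trackpad x in [0, 1] drives a guidance weight,
# audio level in [0, 0.5] drives per-frame latent noise.
guidance = map_control(0.25, 0.0, 1.0, 1.0, 10.0)
noise_scale = map_control(0.4, 0.0, 0.5, 0.0, 0.2)
```

In TouchDesigner this remapping is typically done with CHOPs before the values reach the generation script; the function above is just the arithmetic made explicit.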

 
 
 

Penguins Wearing CHANEL

The following is a collection of different results generated with variants of the prompt “Penguins wearing CHANEL”. These were created using VQGAN and CLIP.

 
 
 
 