Scrying through AI
series of workshops, 2021

A series of participatory design workshops using a synthetic verbal collaging method during a group visioning process.
This method, creating a collage with a text-to-image generative AI tool, uses back-and-forth translation between images and text as a stimulus for the imagination, and it empowers participants who are not visually oriented or skilled in working, creating, or thinking with images.


Through this method, artificial intelligence plays the active role of a non-human agent mediating the visualisation process. However, AI is not used only as a visualisation tool: interacting with it while forming a vision helps participants diverge from the well-worn tracks of human imagination.
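
As a rough illustration of the back-and-forth translation described above, the sketch below treats the method as a simple loop: a verbal prompt becomes an image, the image is described back into words (by a captioning model or by the participants themselves), and the new description seeds the next image. The text-to-image and image-to-text steps are passed in as hypothetical callables standing in for whatever models a given workshop uses; this is an outline of the method's structure, not the actual workshop code.

from typing import Any, Callable, List, Tuple

def verbal_collage(
    seed_prompt: str,
    text_to_image: Callable[[str], Any],   # hypothetical: any text-to-image generator
    image_to_text: Callable[[Any], str],   # hypothetical: a captioning model, or the group's own wording
    rounds: int = 3,
) -> List[Tuple[str, Any, str]]:
    # Back-and-forth translation: prompt -> image -> description -> next prompt.
    history = []
    prompt = seed_prompt
    for _ in range(rounds):
        image = text_to_image(prompt)
        description = image_to_text(image)
        history.append((prompt, image, description))
        prompt = description  # the verbal translation seeds the next round
    return history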

In January 2021, OpenAI published a demo of DALL-E, a tool that generates (almost) photorealistic images from a simple text caption. Text-to-image neural networks existed before (e.g., the freely available AttnGAN model), but 2021 brought a significant leap in the quality and realism of the generated output. Although DALL-E itself is not freely available, open-source alternatives quickly emerged that achieved similar results and became a sensation among "creative AI" enthusiasts. What can these generated images tell us about our world? Can generating images from text be anything more than an addictively entertaining cabaret?

More and more generative AI tools are converging on interaction via text input, and designing with them increasingly depends on mastering so-called "prompt engineering", i.e. writing effective text input. Although it may sound simple, crafting the right text prompt is not entirely trivial and has to take many different factors into account. There is friction between visual and verbal representation, between human and algorithmic logic, between our cultural references and the statistical representation of our visual culture held by artificial intelligence. Can this constant translation between the verbal and the visual, between our language and computer code, between ourselves and artificial intelligence, teach us anything?
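
As a hedged illustration of what taking those factors into account can look like in practice, the sketch below assembles a prompt from separate ingredients such as subject, medium, mood, and style references. The ingredients and the example prompt are hypothetical, not a fixed recipe from the workshops.

from typing import List, Optional

def compose_prompt(subject: str, medium: str = "", mood: str = "",
                   references: Optional[List[str]] = None) -> str:
    # Combine the subject with optional modifiers into one comma-separated prompt.
    parts = [subject, medium, mood] + (references or [])
    return ", ".join(part for part in parts if part)

prompt = compose_prompt(
    subject="a city square reclaimed by wild plants",
    medium="matte painting",
    mood="early morning fog",
    references=["in the style of a 1970s science fiction book cover"],
)
print(prompt)
# a city square reclaimed by wild plants, matte painting, early morning fog,
# in the style of a 1970s science fiction book cover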

In addition to an introduction to synthetic images and a chance to try out text-to-image synthesis for visualising hard-to-visualise concepts, the workshop also includes an experiment with the "verbal collaging" method. We are interested in the extent to which artificial intelligence can influence our imagination and shape our thoughts and ideas, what the images created in this way can tell us, and what meaningful applications this approach can bring to our own artistic work.
