


Collective Vision of Synthetic Reality




participatory project / workshops
2020
           Collective Vision of Synthetic Reality is a participatory project that collects and analyzes the ideas, scenarios, absurd dreams, hopes, and fears surrounding the emergence of AI-generated synthetic media. The project crowd-sources a broad scope of visions from a diverse audience. The responses are collected and positioned on a spectrum of consequences, somewhere between good and evil, small scale and large scale. The Collective Vision represents the average values of the participants, their inclination towards utopian or dystopian thinking about the near future, specific future use cases for synthetic media, and the “recipe” for their generation.

https://syntheticreality.hamosova.com/





           The workshops use a set of cards representing the currently most influential deep learning models, combined with rapid brainstorming, to generate speculative scenarios of synthetic media. This activity puts responsibility for the future in the hands of the participants and forces them to make fast choices based on their subconscious beliefs. They immediately become familiar with the new AI-driven tools, understand how they work, and actually get their hands dirty. By engaging in the process of media synthesis, the participants become active players in synthetic reality.

           This project challenges the binary understanding of good and evil intentions in the context of AI research, places the problem within the different perspectives of the creative industries, and opens up a trans-disciplinary critical discussion.


workshop at Ctrlz.AI Zine Fair Barcelona 2020

           The workshop was held for the first time at Ctrlz.AI Zine Fair in January 2020 in Barcelona, as an off-site event alongside the 2020 ACM Conference on Fairness, Accountability, and Transparency (FAT*), an interdisciplinary conference focused on understanding and mitigating the harms of data-driven algorithmic systems.
           The participants received a very short introduction to the current state of the art of AI-powered media synthesis and were given a set of educational cards. The current version of the card deck is divided into 7 categories of the most accessible AI models: image synthesis, video synthesis, audio synthesis, text-based models, models for image recognition, post-processing, and other models useful in the media synthesis workflow. The participants were given some time to browse the cards and take in the potential of each AI model. The cards are designed so that the categories can be visually distinguished, and the potential of each model understood, within a few seconds.

           Two brainstorming sessions follow this initial phase of the workshop. Each group (equipped with its own card deck) has 10 minutes to speculate on potential future use cases of synthetic media and to “prototype” a specific synthetic media example using the cards. The crafted scenario and a precise recipe for its production (which AI models were used, in which order, etc.) are documented on a workshop sheet, either by drawing or writing. Matching color stickers mark the categories of the AI models used, for easier reference later.


           One group, for example, imagined a future where people will be able to visualize their memories. This could be beneficial in psychotherapy, but also in entertainment and advertising. The imagined workflow uses image and text generation based on a text input, combined with additional data input, voice synthesis, and talking-head generation. The idea of visualizing distant memories that we can no longer picture in our minds recurs across brainstorming sessions. No matter how far the imagined process is from what is currently possible, this speculation inspires us to think about positive uses of synthetic media with large-scale impact.
           The full report will be available on my Medium.



 



demo version of the workshop during The Hmm on Deepfakes

           The online version of the workshop takes place on a Miro board: