VISUALISING SPECULATIVE MATERIALS: USING TEXT-TO-IMAGE PROMPTING TO ELABORATE LIVINGNESS AS A DESIGNED MATERIAL QUALITY

DS 131: Proceedings of the International Conference on Engineering and Product Design Education (E&PDE 2024)

Year: 2024
Editor: Grierson, Hilary; Bohemia, Erik; Buck, Lyndon
Author: Alan, Ali Cankat; Pedgley, Owain
Series: E&PDE
Institution: Istanbul Technical University, Türkiye; Middle East Technical University, Türkiye
Page(s): 306 - 311
DOI number: 10.35199/EPDE.2024.52
ISBN: 978-1-912254-200
ISSN: 3005-4753

Abstract

The democratisation of generative AI (GenAI) has led to the emergence of novel paradigms in design. The varying capabilities of GenAI tools, spanning single and multiple modalities, have allowed designers to integrate them efficiently into their workflows. GenAI is used at various points in the design process, such as research, ideation, visualisation, and reporting. Public GenAI tools are typically operated through prompts: the instructional input and descriptive data a GenAI model needs to start working. Whilst prompts can take different forms, such as text, images, and video, the most common form on the World Wide Web is text, processed by the natural language capabilities of large language models. This ability to tell a GenAI what is needed is called prompting or prompt engineering: a developable skill of carefully crafting sentences and descriptive keywords to return a high-quality result. Designers are already prompting, and in an environment where new GenAI models are developed and made public every day, design students at all levels of their education have started harnessing GenAI for their daily design tasks. One area of great potential is text-to-image modelling, which opens the opportunity for GenAI to act as a fast visualisation tool. This paper presents an implementation of one of the most popular text-to-image GenAI models, Midjourney, as part of an academic research through design (RTD) process. Midjourney was used as a visualisation tool for outcomes of a design fiction biodesign workshop focused on investigating new future possibilities for cohabitation with living materials. In the workshop, design fictions were authored and initially communicated as narratives. Midjourney was then employed to transform verbal storytelling into a visual medium that could more readily provoke design discussion around feedback, plausibility, and design iteration.
The narratives were recorded with a voice recorder, analysed using CAQDAS, and converted into GenAI prompts by carefully selecting descriptive words and phrases tied to each participant's storyworld. The success of the Midjourney implementation lies in its ability to bridge the abstractness of fiction and the tangibility of material, and to visually contextualise design proposals in a future setting. Using GenAI, it was possible to quickly generate visual interpretations of living materials as boundary objects to provoke discussion on the merits and possibilities of livingness as a material quality. The results highlighted two critical takeaways: 1) in terms of design fiction, text-to-image GenAI models offer unexplored potential for visualising narrative-based design outcomes and, more broadly, diegeses; and 2) in terms of materials for design practice and education, such models can ease the communication of performative and experiential qualities of newly developed materials, or of new material proposals, amongst key stakeholders.

Keywords: speculative materials, text-to-image prompting, design fiction, material qualities
