Evaluation of Different Large Language Model Agent Frameworks for Design Engineering Tasks

DS 130: Proceedings of NordDesign 2024, Reykjavik, Iceland, 12th - 14th August 2024

Year: 2024
Editor: Malmqvist, J.; Candi, M.; Saemundsson, R. J.; Bystrom, F. and Isaksson, O.
Author: Pradas Gomez, Alejandro; Panarotto, Massimo; Isaksson, Ola
Series: NordDESIGN
Institution: Chalmers University of Technology, Sweden
Page(s): 693-702
DOI number: 10.35199/NORDDESIGN2024.74
ISBN: 978-1-912254-21-7

Abstract

This paper evaluates the ability of Large Language Models (LLMs) to support engineering tasks. Reasoning frameworks such as agents and multi-agent systems are described and compared. The frameworks are implemented for an engineering task using the LangChain Python package. The results show that a supportive reasoning framework can increase the quality of responses compared to a standalone LLM. The applicability of these frameworks to other engineering tasks is discussed. Finally, a perspective on task ownership among the designer, traditional software, and generative AI is presented.
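
To illustrate the kind of setup the abstract refers to, the following is a minimal sketch of a single LLM agent built with the classic LangChain agent API (initialize_agent, Tool, ChatOpenAI). It is not the paper's implementation: the material-density tool, the model choice, and the example query are hypothetical placeholders for an engineering task.

```python
# Minimal sketch of an LLM agent with one engineering tool, assuming the
# classic LangChain agent API. The tool and query are hypothetical examples,
# not the implementation described in the paper.
from langchain.agents import AgentType, Tool, initialize_agent
from langchain_openai import ChatOpenAI


def lookup_density(material: str) -> str:
    """Hypothetical tool: return the density of a material in kg/m^3."""
    densities = {"aluminium": "2700", "steel": "7850", "titanium": "4500"}
    return densities.get(material.strip().lower(), "unknown material")


tools = [
    Tool(
        name="material_density",
        func=lookup_density,
        description="Returns the density of a material in kg/m^3 given its name.",
    )
]

llm = ChatOpenAI(model="gpt-4", temperature=0)

# ReAct-style agent: the LLM reasons step by step and decides when to call the tool.
agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)

print(agent.run("Estimate the mass of a 0.5 m^3 solid aluminium bracket."))
```

A multi-agent variant of the same idea would coordinate several such agents (for example, a requirements agent and a design agent) rather than relying on a single reasoning loop.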

Keywords: Artificial Intelligence (AI), Design Cognition, Large Language Models (LLM), Design Automation
