L125: Generative AI for AEC: Myth or Potential? with Theodore Galanos


Apr 04 2024 · 83 mins

#ai #aec #dl #generativedesign #aectech

Theodore Galanos joins the podcast for an insightful discussion around artificial intelligence and technology. Key topics include recent developments in generative AI models such as DALL-E, Stable Diffusion, and NeRF-based generation, with an emphasis on the shift towards leveraging language models and instruction tuning to create personalized and customizable AI. For example, Galanos explains how techniques like chain-of-thought prompting allow complex design tasks to be decomposed through step-by-step natural language instructions (a minimal prompting sketch follows the timestamps below). Other notable topics include the democratization of asset creation through AI, training models with expert knowledge, designing intelligent interfaces, and overcoming obstacles around collaboration, data formats, and workflows. Galanos offers perspective on how architects and designers can participate in advancing AI for the AEC industry, stressing the importance of cross-disciplinary collaboration. He concludes with excitement about the potential for 'intelligent design' systems that can understand tasks and requirements with no formal training.

Timestamps:
00:00 - Introduction
00:22 - Background and recent work
01:00 - Discussion on generative AI models like NeRF
01:30 - Thoughts on the current state of generative AI
02:00 - Expanding scope beyond just design artifact generation
02:58 - Using human feedback to train models
03:57 - Democratizing access to generative design
04:48 - Architect model training with planning prompts
05:15 - Language models changing interfaces
06:21 - Guidelines for architecture firms adopting ML
07:30 - Codifying non-linear design processes
08:20 - Capturing design diffs to train models
09:34 - Software collaboration challenges with ML
10:57 - Bottlenecks between AI models and software
11:38 - PDFs losing information, needing structured inputs
12:55 - Low-hanging fruit for architecture firms trying ML
13:39 - Power imbalance between AI APIs and startups building on them
14:16 - Thoughts on AI trends relevant for architecture
15:38 - Top ML papers to read
16:04 - Recommendations for insightful podcasts
16:56 - Upcoming conferences/events of interest
18:01 - Using Luma AI to capture 3D scans for generative models
19:41 - Evolution of interest from GANs to language models
21:18 - Using competitions to create datasets of design processes
23:28 - Modular interfaces to swap AI models
24:43 - Business model of AI infrastructure vs startups using APIs
26:34 - Article on AI trends including digital twins
28:22 - Interface challenges for collaborative intelligent design
30:51 - Linearizing nonlinear design processes
32:38 - Losing information when exporting to PDF
36:12 - Chain-of-thought prompting
38:38 - Robotics applications of language models
40:00 - Diffusion models replacing GANs
41:01 - Scaling through simplicity like language models
42:53 - Domain expertise needed to extend capabilities of language models
43:20 - Workflow integration needed for human annotation
44:16 - Capturing the full sequence of design interactions
46:07 - Analyzing design process logs from competitions
48:13 - Scaling data collection through model training
49:09 - Existing project steps like design generation, review, validation
50:53 - PDFs losing semantic information
52:02 - Capturing callout sequences during design coordination
53:04 - Massive data needs of generative pretrained models vs tuning models
54:06 - The challenge of perfectly extracting information from PDFs
55:51 - Using common formats from the start for future ML readiness
57:05 - Implementing processes for proper data strategies
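
As a minimal sketch of the chain-of-thought prompting idea mentioned above: the prompt asks a language model to work through a design brief step by step before proposing design moves. The library, model name, and design brief here are illustrative assumptions, not material from the episode.

```python
# Illustrative sketch only: a chain-of-thought style prompt that asks a language
# model to decompose a design brief into step-by-step instructions.
# The OpenAI client, model name, and brief are placeholders/assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

design_brief = (
    "A two-storey community library on a narrow urban lot, "
    "maximizing daylight in the reading areas."
)

prompt = (
    "You are assisting an architect. Think step by step.\n"
    f"Design brief: {design_brief}\n"
    "1. List the key functional requirements.\n"
    "2. Derive massing and orientation constraints from the site and daylight goals.\n"
    "3. Propose a sequence of design moves, each justified in one sentence."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```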

--- Send in a voice message: https://podcasters.spotify.com/pod/show/mayur-m-mistry/message