154: Activating Pixels: Workflows Connecting GenAI to Physical Output
Activating Pixels explores generative AI as a collaborative design tool, focusing on methods for converting 2D images into 3D models. Given rapid advances in 2D-to-3D generation and broad access to open-source tools, the project evaluates which platforms and techniques best support iterative design workflows. It develops hybrid digital-to-physical processes that link visual language and form-finding with fabrication, translating digital models into buildable forms. Comparative testing across Midjourney, ComfyUI, Runway, Luma AI, and emerging models such as Hunyuan 3D-2.0 examines prompt strategies, tool interoperability, and advances in mesh and texture generation. Sculptural precedents provide a testing ground for translating scale, geometry, style, and cultural references into three dimensions, with future directions extending toward architectural applications. Emphasizing mesh adaptability, optimization, and NURBS conversion, the research proposes replicable workflows that connect AI-driven speculation with material realization. Early results highlight creative possibilities alongside technical constraints, underscoring the need for scalable, reproducible methods that bridge digital imagination and built reality.
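One concrete hurdle in translating AI-generated meshes into buildable forms, as the abstract notes under mesh adaptability and optimization, is that generated geometry is often not watertight and so cannot be sliced for fabrication. The sketch below is not from the project itself; it is a minimal, stdlib-only illustration (the function name `is_watertight` is our own) of the standard check that every edge of a triangle mesh is shared by exactly two faces:

```python
from collections import Counter

def is_watertight(faces):
    """Return True if every undirected edge is shared by exactly two faces,
    a necessary condition for a closed, fabricable triangle mesh."""
    edge_counts = Counter()
    for a, b, c in faces:
        for u, v in ((a, b), (b, c), (c, a)):
            # Sort so (u, v) and (v, u) count as the same undirected edge.
            edge_counts[tuple(sorted((u, v)))] += 1
    return all(count == 2 for count in edge_counts.values())

# A tetrahedron (4 vertices, 4 triangular faces) is a closed surface.
tetra = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
print(is_watertight(tetra))      # True

# Dropping one face opens a hole; slicing software would reject this.
print(is_watertight(tetra[:3]))  # False
```

In practice a pipeline like the one described would run such checks (and repairs) on exported meshes before decimation or NURBS conversion, typically via a geometry library rather than hand-rolled code.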
