99: Beyond Visual: Evidencing Shared Atmospheric Perception With Generative AI
This study investigates whether the affective quality of architectural representations can be consistently interpreted across visual and auditory modalities. A proof-of-concept experiment was conducted using a hybrid human-AI protocol to generate and curate abstract architectural images and soundscapes based on a four-dimensional affective model. A specialized sample of 12 participants with architectural training performed a matching task on a curated subset of these stimuli. Inter-rater agreement, measured with Fleiss’ kappa, showed a statistically significant, moderate consensus well above chance-level agreement. This result provides preliminary support for the hypothesis that the imaginative construction of atmosphere from media is not purely idiosyncratic, at least within this specialized cohort. While limited by the small, specialized sample and the curated stimulus selection, the study establishes a methodological baseline for empirically investigating the shared nature of atmospheric perception.
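
As a minimal illustrative sketch (not the study's actual analysis code or data), inter-rater agreement of this kind can be computed with Fleiss' kappa from an items-by-raters matrix of categorical responses; the category count and response values below are hypothetical placeholders, assuming each participant assigns one of four affective categories to each stimulus.

```python
# Illustrative sketch only: random placeholder data, not the study's ratings.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Rows = stimuli (items), columns = raters (e.g. the 12 participants);
# each cell holds the affective category a rater assigned to that stimulus.
rng = np.random.default_rng(0)
ratings = rng.integers(low=0, high=4, size=(20, 12))  # 20 stimuli, 12 raters, 4 categories (assumed)

# Convert the items-by-raters matrix into an items-by-categories count table.
counts, _ = aggregate_raters(ratings)

# Fleiss' kappa: agreement across all raters and items, corrected for chance.
kappa = fleiss_kappa(counts, method="fleiss")
print(f"Fleiss' kappa = {kappa:.3f}")
```

With real matching-task responses in place of the random placeholders, the resulting kappa would be compared against chance-level agreement, as reported in the abstract.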
