SYNOPSIS
EmoGenAi is a web-based application that integrates three AI-driven modeling methods: probabilistic predictive models, emotion analysis of human feedback, and text-to-image generative AI. It is designed to support students and design practitioners in exploring data-informed approaches to architectural decision-making, as well as creatively experimenting with architectural theories and techniques.
KEY FEATURES
EmoGenAi comprises three core components, each supporting one of these AI-driven modeling methods:
PredictiveAi is the component of the EmoGenAi app dedicated to developing predictive models using a range of machine learning techniques, including classification, regression, and clustering. It enables users to train models on structured datasets—such as environmental performance metrics, spatial characteristics, or user behavior patterns—to uncover relationships, trends, and hidden patterns. Through classification, PredictiveAi can categorize design options based on predefined labels (e.g., high vs. low performance); regression allows for predicting continuous outcomes (e.g., daylight levels or energy use); and clustering helps identify natural groupings within the data, revealing emergent typologies or user preference patterns. This predictive capability supports data-informed decision-making, helping designers evaluate the potential impact of their choices before physical implementation.
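A rough illustration of this workflow appears below, assuming a scikit-learn backend and a hypothetical table of design options; the dataset, column names, and model choices are illustrative assumptions, not part of EmoGenAi itself.

    # Sketch of a PredictiveAi-style workflow: classification, regression,
    # and clustering over a hypothetical table of design options.
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
    from sklearn.cluster import KMeans

    df = pd.read_csv("design_options.csv")  # hypothetical structured dataset
    features = df[["window_to_wall_ratio", "floor_area", "orientation_deg"]]

    # Classification: label design options as high vs. low performance.
    X_train, X_test, y_train, y_test = train_test_split(
        features, df["performance_label"], test_size=0.2, random_state=0)
    clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    print("classification accuracy:", clf.score(X_test, y_test))

    # Regression: predict a continuous outcome such as daylight level.
    reg = RandomForestRegressor(random_state=0).fit(
        X_train, df.loc[X_train.index, "daylight_autonomy"])
    print("predicted daylight:", reg.predict(X_test)[:3])

    # Clustering: reveal emergent groupings among the design options.
    df["cluster"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

In the app itself these steps run through the web interface; the code simply makes the underlying modeling logic concrete.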
EmotionAi performs context-based emotion analysis by leveraging large language models (LLMs) to interpret user-defined sentiments. Instead of relying solely on preset categories, users can define a custom list of emotion or sentiment-related terms relevant to their specific project or design goals. The LLM then analyzes human feedback—such as survey responses, user comments, or design critiques—in its full context, identifying nuanced emotional cues and aligning them with the user's predefined framework. This flexible, adaptive approach allows for a deeper understanding of users' perspectives and emotional responses across diverse design contexts.
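The sketch below makes this concrete with a user-defined emotion list. EmoGenAi does not document which LLM it calls, so the OpenAI Python SDK, the model name, and the emotion terms are stand-in assumptions for illustration only.

    # Sketch of context-based emotion analysis with user-defined terms,
    # using the OpenAI Python SDK as a stand-in LLM backend.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    emotion_terms = ["delight", "claustrophobia", "calm", "disorientation"]  # user-defined
    feedback = "The atrium felt generous, but the lower corridors made me anxious."

    prompt = (
        "Classify the following design feedback using only these emotion terms: "
        f"{', '.join(emotion_terms)}. Return one term per sentence with a short "
        "justification grounded in the text.\n\n"
        f"Feedback: {feedback}"
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable LLM would work
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)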
ArchGenAi is another core component of EmoGenAi, designed to generate architectural forms and spatial compositions through text-to-image synthesis. Powered by an LLM-based algorithm, it interprets user-defined prompts and translates them into visual outputs by incorporating a wide range of architectural parameters—including architectural theory and history, building components, structural systems, design frameworks, and conceptual approaches. By leveraging generative AI, ArchGenAi enables users to visually explore abstract ideas, test formal variations, and experiment with diverse design logics in a rapid and intuitive way. This supports both creative exploration and critical reflection, bridging conceptual thinking with visual imagination in architectural design.
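The sketch below shows how such a prompt might be assembled from architectural parameters and passed to a text-to-image model. The source does not name the model behind ArchGenAi, so Stable Diffusion via the diffusers library stands in here, and the parameter vocabulary is purely illustrative.

    # Sketch of prompt assembly for text-to-image generation, with Stable
    # Diffusion (via diffusers) standing in for ArchGenAi's unnamed model.
    import torch
    from diffusers import StableDiffusionPipeline

    parameters = {  # hypothetical architectural parameters
        "theory": "metabolism",
        "structural system": "timber diagrid",
        "building component": "double-skin facade",
        "concept": "porous ground floor opening onto a public courtyard",
    }
    prompt = (
        "Architectural exterior perspective, "
        + ", ".join(f"{k}: {v}" for k, v in parameters.items())
        + ", photorealistic rendering"
    )

    pipe = StableDiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
    ).to("cuda")
    image = pipe(prompt, num_inference_steps=30).images[0]
    image.save("archgenai_sketch.png")

Swapping the parameter values regenerates a new formal variation from the same prompt template, which is the kind of rapid, iterative exploration the component is built around.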
UPCOMING WORKSHOP
A virtual workshop will take place over three sessions, one for each core AI-driven component: predictive modeling, emotion analysis, and generative design. Participants will engage with real-world data, hands-on activities, and guided exploration on our EmoGenAi platform, applying the tool to problems in architectural design and planning.

Session 1: AI-Informed Modeling for Predicting Building Performance Scenarios
FRIDAY NOV 7TH 2025 @ 2 PM – 5 PM (CT)

Session 2: Understanding Design Context through Big Data and AI-Interpreted Human Feedback Using LLMs
SATURDAY NOV 8TH 2025 @ 10 AM – 1 PM (CT)

Session 3: LLMs in Generative Design: Theoretical Foundations for Developing Architectural Form
SATURDAY NOV 15TH 2025 @ 10 AM – 1 PM (CT)
To register, participants must use their affiliated institutional email address. Registration details are provided below: