
SYNTHIA

Novel Concept Design with Affordance Composition

1University of Illinois Urbana-Champaign, 2University of California Los Angeles
*Equal contribution
ACL 2025 Main

Examples of Novel Concept Design Generated by SYNTHIA and Baseline Text-to-Image Models Using Similar Affordances (left) vs. Dissimilar Affordances (right).

Overview


We introduce SYNTHIA, a framework for concept synthesis with affordance composition that generates functionally coherent and visually novel concepts given a set of desired affordances. Unlike prior work that relies on complex descriptive text to produce stylistic variations or aesthetic features, SYNTHIA leverages affordances, defined as the functionality offered by an object or its parts, as a structural guide for novel concept synthesis.

Figure 1. Overview of SYNTHIA.

SYNTHIA consists of three key stages: (1) affordance composition curriculum construction, (2) affordance-based curriculum learning, and (3) evaluation. In the first stage, we build a training curriculum by sampling affordance pairs from our ontology with gradually increasing affordance distances. Using this curriculum, we fine-tune T2I models so that they first learn concept-affordance associations from easy data and then integrate multiple affordances into a single functional form from hard data. We employ a contrastive objective with positive constraints (affordances), negative constraints (existing concepts), and corresponding images, encouraging visual novelty relative to existing concepts. Finally, we evaluate models through automatic and human evaluation along four metrics: faithfulness, novelty, practicality, and coherence.
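To make the curriculum-construction step concrete, the sketch below orders affordance pairs by a distance score and splits them into easy-to-hard stages. This is a minimal illustration under assumed inputs: the toy affordance embeddings, the cosine-based distance, and the stage split are hypothetical placeholders, not SYNTHIA's actual ontology or implementation.

```python
# Minimal sketch: curriculum of affordance pairs ordered from small to large
# affordance distance (easy -> hard). Embeddings and distance are illustrative.
from itertools import combinations
import math

# Hypothetical affordance embeddings (e.g., pooled text embeddings).
AFFORDANCE_EMBEDDINGS = {
    "sit":     [0.9, 0.1, 0.0],
    "recline": [0.8, 0.2, 0.1],
    "store":   [0.1, 0.9, 0.2],
    "cool":    [0.0, 0.3, 0.9],
}

def cosine_distance(u, v):
    """Return 1 - cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

def build_curriculum(embeddings, num_stages=2):
    """Sort affordance pairs by distance and split them into easy->hard stages."""
    pairs = [
        (a, b, cosine_distance(embeddings[a], embeddings[b]))
        for a, b in combinations(embeddings, 2)
    ]
    pairs.sort(key=lambda p: p[2])  # similar (easy) pairs come first
    stage_size = math.ceil(len(pairs) / num_stages)
    return [pairs[i:i + stage_size] for i in range(0, len(pairs), stage_size)]

if __name__ == "__main__":
    for stage_idx, stage in enumerate(build_curriculum(AFFORDANCE_EMBEDDINGS)):
        print(f"Stage {stage_idx}:")
        for a, b, dist in stage:
            print(f"  ({a}, {b}) distance = {dist:.3f}")
```

In this toy run, near-synonymous affordances such as "sit" and "recline" land in the early (easy) stage, while distant pairs such as "sit" and "cool" land in the later (hard) stage, mirroring the easy-to-hard ordering described above.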

BibTeX

        @article{ha2025synthia,
          title={SYNTHIA: Novel Concept Design with Affordance Composition},
          author={Ha, Hyeonjeong and Jin, Xiaomeng and Kim, Jeonghwan and Liu, Jiateng and Wang, Zhenhailong and Nguyen, Khanh Duy and Blume, Ansel and Peng, Nanyun and Chang, Kai-Wei and Ji, Heng},
          journal={arXiv preprint arXiv:2502.17793},
          year={2025}
        }