🎨 Lay-Your-Scene: Natural Scene Layout Generation with Diffusion Transformers

ICCV 2025
Project done while interning at UC San Diego.
Lay-Your-Scene Teaser

Figure 1: Lay-Your-Scene (shorthand LayouSyn) demonstrates superior scene awareness, generating layouts with high geometric plausibility and strictly adhering to numerical and spatial constraints. Object nouns in the prompts are highlighted with corresponding colors in the layout.

Abstract

We present Lay-Your-Scene (shorthand LayouSyn), a novel text-to-layout generation pipeline for natural scenes. Prior scene layout generation methods are either closed-vocabulary or use proprietary large language models for open-vocabulary generation, limiting their modeling capabilities and broader applicability in controllable image generation. In this work, we propose to use lightweight open-source language models to obtain scene elements from text prompts and a novel aspect-aware diffusion Transformer architecture trained in an open-vocabulary manner for conditional layout generation. Extensive experiments demonstrate that LayouSyn outperforms existing methods and achieves state-of-the-art performance on challenging spatial and numerical reasoning benchmarks. Additionally, we present two applications of LayouSyn. First, we show that coarse initialization from large language models can be seamlessly combined with our method to achieve better results. Second, we present a pipeline for adding objects to images, demonstrating the potential of LayouSyn in image editing applications.


Method

Lay-Your-Scene Architecture

Figure 2: Overview of inference pipeline for LayouSyn


We frame the scene layout generation task as a two-stage process:

  1. Description Set Generation: A lightweight open-source language model extracts the relevant object descriptions from the text prompt. For example, if the prompt is "Three people walking on the street", the model outputs a JSON object with the count of each object, i.e., {"person": 3, "street": 1}.
  2. Conditional Layout Generation: A trained aspect-aware diffusion Transformer generates layouts directly in bounding-box space, conditioned on the text prompt and the object descriptions.
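The first stage reduces to parsing a JSON description set out of the language model's response. A minimal sketch of that parsing step (the function names and the fallback behavior are illustrative assumptions, not the paper's implementation):

```python
import json

def parse_description_set(llm_output: str) -> dict:
    """Parse the JSON description set emitted by the language model.

    Expects a mapping from object noun to count, e.g. {"person": 3, "street": 1}.
    Falls back to an empty set if the output is not valid JSON.
    """
    try:
        counts = json.loads(llm_output)
    except (json.JSONDecodeError, TypeError):
        return {}
    # Keep only well-formed entries: string noun -> positive integer count.
    return {
        noun: count
        for noun, count in counts.items()
        if isinstance(noun, str) and isinstance(count, int) and count > 0
    }

def expand_to_elements(description_set: dict) -> list:
    """Flatten counts into one entry per object instance for the layout model."""
    return [noun for noun, count in description_set.items() for _ in range(count)]

# Example: the description set for "Three people walking on the street".
elements = expand_to_elements(parse_description_set('{"person": 3, "street": 1}'))
# elements == ["person", "person", "person", "street"]
```

The expanded element list is what the second stage conditions on, alongside the original prompt.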


Qualitative Results

Qualitative comparisons between LayoutGPT+GLIGEN and LayouSyn+GLIGEN

Figure 3: Comparative analysis with LayoutGPT. In the first example, LayoutGPT produces a semantically incorrect layout, with the table and chairs not positioned under the lamp while our method follows the constraints precisely. In the second example, LayoutGPT generates a geometrically incorrect layout for the cat, whereas our method successfully understands the relationships between different objects and produces a correct layout.

Diversity of generated layouts and aspect ratio variations

Figure 4: Diversity of layouts generated by LayouSyn for the same text prompt.

Layout generation with varying aspect ratios

Figure 5: Layout generation with varying aspect ratios. Layouts generated at different aspect ratios for prompt: "A man riding a horse on the street." The model adjusts the position and aspect ratio of the man and the horse to produce natural-looking layouts.


Quantitative Results

We evaluate LayouSyn on two criteria:

  1. Layout Quality: Following the document layout generation literature, we render the generated layout as an image, mapping each object to a specific color, with colors assigned according to CLIP-based semantic similarity between objects. We refer to this metric as L-FID (Layout-FID).
     | Model | L-FID ↓ |
     | --- | --- |
     | LayoutGPT (GPT-3.5) | 3.51 |
     | LayoutGPT (GPT-4o-mini) | 6.72 |
     | Llama-3.1-8B (finetuned) | 13.95 |
     | LayouSyn | **3.07** (+12.5%) |
     | LayouSyn (GRIT pretraining) | 3.31 (+5.6%) |

     Table 1: Layout Quality Evaluation on the COCO-GR Dataset: Our method outperforms existing layout generation methods on the L-FID score by at least 5.6%.

  2. Spatial and Numerical Prompt-Following Ability: We evaluate our method on the NSR-1K benchmark, assessing whether the generated layouts follow the numerical and spatial constraints specified in the prompt.
| Model | Num. Prec. ↑ | Num. Recall ↑ | Num. Acc. ↑ | Num. GLIP ↑ | Spatial Acc. ↑ | Spatial GLIP ↑ |
| --- | --- | --- | --- | --- | --- | --- |
| GT layouts | 100.0 | 100.0 | 100.0 | 50.08 | 100.00 | 57.20 |
| *In-context Learning* | | | | | | |
| LayoutGPT (Llama-3.1-8B) | 78.61 | 84.01 | 71.71 | 49.48 | 75.40 | 47.92 |
| LayoutGPT (GPT-3.5) | 76.29 | 86.64 | 76.72 | <u>54.25</u> | 87.07 | 56.89 |
| LayoutGPT (GPT-4o-mini) | 73.82 | 86.84 | 77.51 | <u>57.96</u> | 92.01 | <u>60.49</u> |
| *Zero-shot* | | | | | | |
| LLMGroundedDiffusion (GPT-4o-mini) | 84.36 | 95.94 | 89.94 | 38.56 | 72.46 | 27.09 |
| LLM Blueprint (GPT-4o-mini) | **87.21** | 67.29 | 38.36 | 42.24 | 73.52 | 50.21 |
| *Trained / Finetuned* | | | | | | |
| LayoutTransformer\* | 75.70 | 61.69 | 22.26 | 40.55 | 6.36 | 28.13 |
| Ranni | 56.23 | 83.28 | 40.80 | 38.19 | 53.29 | 24.38 |
| Llama-3.1-8B (finetuned) | 79.33 | 93.36 | 70.84 | 44.72 | 86.64 | 52.93 |
| *Ours* | | | | | | |
| LayouSyn | 77.62 | **99.23** | **95.14** | <u>56.17</u> | 87.49 | 54.91 |
| LayouSyn (GRIT pretraining) | 77.62 | **99.23** | **95.14** | <u>56.20</u> | **92.58** | <u>58.94</u> |

Table 2: Spatial and numerical reasoning evaluation on the NSR-1K Benchmark. LayouSyn outperforms existing methods on spatial and numerical reasoning tasks, achieving state-of-the-art performance on most metrics. Note: \* indicates metrics reported by LayoutGPT. We bold the best values on metrics where the ground-truth (GT) performance is 100% and underline values where a method exceeds the ground-truth performance.
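The numerical and spatial checks behind Table 2 can be illustrated with simple count-matching and center-comparison rules. This is a minimal sketch of such checks, not the benchmark's exact scoring code:

```python
def count_precision_recall(pred: dict, gt: dict):
    """Per-class matched counts: precision over predicted objects, recall over GT objects."""
    matched = sum(min(pred.get(c, 0), gt.get(c, 0)) for c in set(pred) | set(gt))
    total_pred = sum(pred.values())
    total_gt = sum(gt.values())
    precision = matched / total_pred if total_pred else 0.0
    recall = matched / total_gt if total_gt else 0.0
    return precision, recall

def box_center(box):
    """Center of an (x, y, w, h) bounding box."""
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def satisfies_spatial(relation: str, box_a, box_b) -> bool:
    """Check a left/right/above/below relation between two boxes by comparing
    their centers (image convention: y grows downward)."""
    (ax, ay), (bx, by) = box_center(box_a), box_center(box_b)
    if relation == "left":
        return ax < bx
    if relation == "right":
        return ax > bx
    if relation == "above":
        return ay < by
    if relation == "below":
        return ay > by
    raise ValueError(f"unknown relation: {relation}")

# Example: a predicted layout with two cats where the prompt asked for three.
precision, recall = count_precision_recall({"cat": 2}, {"cat": 3})  # (1.0, 0.666…)
```

Layout-level accuracy then corresponds to a layout passing all of its constraints, while the GLIP columns score the images generated from those layouts.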


Applications

  • LLM Integration: LayouSyn can be integrated with an LLM, using its planned layouts as initialization and refining them to achieve better performance with equal or fewer sampling steps. We demonstrate the improvement on the NSR-1K spatial reasoning benchmark:
     | Method | Llama-3.1-8B | GPT-3.5 | GPT-4o-mini |
     | --- | --- | --- | --- |
     | Original | 75.40 | 87.07 | 92.01 |
     | Description Set | 89.75 | 90.04 | 90.95 |
     | Description Set + Inv (15) | 90.46 | 92.37 | 92.08 |

    Table 3: Spatial reasoning results with LLM initialization. We take the outputs from LayoutGPT with different LLMs (Original) and evaluate two strategies: 1) Description Set only: use only the description sets predicted by LayoutGPT and denoise from Gaussian noise with the full 100 denoising steps; 2) Description Set + Inversion: in addition to using the description sets, apply DDIM inversion to the bounding boxes predicted by the LLM and denoise for the same number of steps as the inversion.

  • Image Editing Pipeline: LayouSyn enables an automated pipeline for adding objects to images: (1) extract the relevant objects from the prompt with a lightweight LLM, (2) detect the existing objects in the scene with Grounding DINO, (3) complete the layout for the new object with LayouSyn, and (4) inpaint the object into the image with the GLIGEN inpainting pipeline.
    Image editing application

    Figure 6: Examples of automated object addition using LayouSyn.
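The hand-off between detection (step 2) and layout completion (step 3) can be pictured as assembling a partial layout in which detected boxes are fixed and the new object's box is left for the model to generate. A minimal data-assembly sketch, with illustrative field names (`frozen`, `box`) that are assumptions rather than the actual interface:

```python
def build_partial_layout(detections: list, new_object: str) -> list:
    """Assemble the conditioning input for layout completion.

    Detected objects keep their (x, y, w, h) boxes and are frozen;
    the new object's box is left empty for the layout model to fill in.
    """
    layout = [
        {"label": label, "box": box, "frozen": True}
        for label, box in detections
    ]
    layout.append({"label": new_object, "box": None, "frozen": False})
    return layout

# Example: a detector found a table; we want to add a vase to the scene.
partial = build_partial_layout([("table", (0.1, 0.5, 0.8, 0.4))], "vase")
# The layout model would then generate a box for "vase" consistent with the table.
```

The completed layout (all boxes filled) is what the GLIGEN inpainting step consumes in step 4.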


BibTeX

@article{srivastava2025layyourscenenaturalscenelayout,
    title={Lay-Your-Scene: Natural Scene Layout Generation with Diffusion Transformers},
    author={Divyansh Srivastava and Xiang Zhang and He Wen and Chenru Wen and Zhuowen Tu},
    year={2025},
    eprint={2505.04718},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    url={https://arxiv.org/abs/2505.04718},
}