Pattern images are everywhere in the digital and physical worlds, and tools to edit them are valuable. But editing pattern images is tricky, as desired edits are often programmatic: structure-aware edits that alter the underlying program that generates the pattern. One could attempt to infer this underlying program, but current methods for doing so struggle with complex images and produce unorganized programs that make editing tedious. In this work, we introduce a novel approach to perform programmatic edits on pattern images. By using a pattern analogy—a pair of simple patterns to demonstrate the intended edit—and a learning-based generative model to execute these edits, our method allows users to intuitively edit patterns. To enable this paradigm, we introduce SplitWeave, a domain-specific language that, combined with a framework for sampling synthetic pattern analogies, enables the creation of a large, high-quality synthetic training dataset. We also present TriFuser, a Latent Diffusion Model (LDM) designed to overcome critical issues that arise when naively deploying LDMs to this task. Extensive experiments on real-world, artist-sourced patterns reveal that our method faithfully performs the demonstrated edit while also generalizing to related pattern styles beyond its training distribution.
Editing pattern images is inherently complex because patterns are structured by rules that govern their layout and composition. Tiling patterns, for example, rely on alignment and repetition, while retro-style designs are defined by spatial divisions and fills. Designers often aim to adjust these underlying organizational rules rather than make superficial, pixel-level changes—what we term programmatic edits. These edits manipulate the structural logic of a pattern rather than its surface appearance, requiring a fundamentally different approach than traditional image manipulation techniques.
A common strategy for programmatic edits is Visual Program Inference (VPI), which involves inferring a program that replicates an image and then editing the image by adjusting the program's parameters (check out our previous work on VPI). However, this approach is challenging for patterns, as they often combine rule-based logic with non-parametric components. For instance, the arrangement of elements in a tiling pattern may follow explicit rules, while the elements themselves do not. Moreover, VPI-generated programs can be poorly structured, with unlabeled parameters that make editing cumbersome.
Instead of solving the harder problem of program inference, we propose an alternative approach based on analogies. By providing a pair of simple example patterns that demonstrate a desired transformation, users can specify which structural property to edit and how to modify it. Our system employs a conditional generative model, TriFuser, to execute the edit on a target pattern, preserving its other structural features. Unlike prior work that applies analogies to appearance changes, our method uniquely enables structure-aware, programmatic edits, expanding the scope of analogical editing in pattern manipulation. Check out the examples above!
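At inference time the interface is deliberately simple: the user supplies the example pair (A, A') and a target pattern B, and the model generates B'. The sketch below is only illustrative; the function name and the model's sample() method are assumptions for exposition, not the released TriFuser API.

import torch

def edit_by_analogy(model, a: torch.Tensor, a_prime: torch.Tensor,
                    b: torch.Tensor) -> torch.Tensor:
    # Apply the structural edit demonstrated by (a, a_prime) to b.
    # `model` stands in for a conditional generative model such as an LDM;
    # we only assume it exposes a sample() method conditioned on the three
    # images, stacked here along the channel dimension.
    with torch.no_grad():
        return model.sample(cond=torch.cat([a, a_prime, b], dim=1))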
We introduce SplitWeave, a domain-specific language (DSL) for designing visual patterns, which plays a dual role in our approach. First, it allows parametric definitions of example pairs (A, A'), enabling users to guide edits to a target pattern B as though its underlying program were accessible. Second, SplitWeave facilitates the generation of synthetic training data by providing program samplers for creating patterns in common styles, such as tiling-based designs and intricate color field patterns. By applying identical edits to the SplitWeave programs for A and B, we generate quartets (A, A', B, B') in which the transformation between the example patterns mirrors the transformation between the target patterns. This synthetic dataset is used to train the editing model so that it learns to map (A, A', B) to B'.
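To make the data-generation recipe concrete, here is a minimal sketch of the quartet-sampling loop. The helpers sample_splitweave_program, sample_edit, and render are hypothetical stand-ins for the actual SplitWeave samplers and renderer; only the overall structure—the same edit applied to both programs, yielding training tuples that map (A, A', B) to B'—follows the description above.

import random
from dataclasses import dataclass, field

@dataclass
class Program:
    """Stand-in for a SplitWeave program: a pattern style plus parameters."""
    style: str
    params: dict = field(default_factory=dict)

def sample_splitweave_program(style: str) -> Program:
    # Placeholder sampler: the real pipeline draws full SplitWeave programs
    # for a pattern family (e.g. tiling-based or color-field designs).
    return Program(style, {"rotation": random.uniform(0.0, 90.0),
                           "spacing": random.uniform(0.5, 2.0)})

def sample_edit() -> dict:
    # An edit here is a change to one structural parameter; the same change
    # is applied to both the example program and the target program.
    return {"rotation": random.uniform(-45.0, 45.0)}

def apply_edit(prog: Program, edit: dict) -> Program:
    params = dict(prog.params)
    for name, delta in edit.items():
        params[name] = params.get(name, 0.0) + delta
    return Program(prog.style, params)

def render(prog: Program) -> str:
    # Placeholder for rasterizing a SplitWeave program to an image.
    return f"<{prog.style} pattern, params={prog.params}>"

def sample_quartet():
    edit = sample_edit()
    prog_a = sample_splitweave_program("simple-tiling")  # simple example pattern
    prog_b = sample_splitweave_program("tiling")         # richer target pattern
    a, a_prime = render(prog_a), render(apply_edit(prog_a, edit))
    b, b_prime = render(prog_b), render(apply_edit(prog_b, edit))
    # Training tuple: the editing model learns to map (A, A', B) to B'.
    return (a, a_prime, b), b_prime

In the actual system, render produces raster images and the programs span the full SplitWeave DSL; the toy parameter dictionary above only illustrates that the example and target pairs share the same structural edit.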
@misc{ganeshan2024patterns,
      title={Pattern Analogies: Learning to Perform Programmatic Image Edits by Analogy},
      author={Aditya Ganeshan and Thibault Groueix and Paul Guerrero and Radomír Měch and Matthew Fisher and Daniel Ritchie},
      year={2024},
      eprint={2412.12463},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2412.12463},
}