We present SeamlessGAN, a method capable of automatically generating tileable texture maps from a single input exemplar. In contrast to most existing methods, which focus solely on the synthesis problem, our work tackles both problems, synthesis and tileability, simultaneously. Our key idea is the observation that tiling the latent space of a generative network trained with adversarial expansion produces outputs that are continuous at the seam intersections and can then be turned into tileable images by cropping the central area. Since not every value of the latent space produces a high-quality output, we leverage the discriminator as a perceptual error metric capable of identifying artifact-free textures during a sampling process. Further, in contrast to previous work on deep texture synthesis, our model is designed and optimized to work with multi-layered texture representations, enabling textures composed of multiple maps such as albedo, normals, etc. We extensively test our design choices for the network architecture, loss function, and sampling parameters. We show, qualitatively and quantitatively, that our approach outperforms previous methods and works for textures of different types.
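The latent-tiling idea above can be sketched in a few lines: tile the latent tensor spatially, run it through the generator, and crop the central region so the seams fall inside the crop. The `generator` and `discriminator` callables below are hypothetical stand-ins, not the paper's actual networks; this is a minimal NumPy sketch of the sampling logic, assuming HWC tensors.

```python
import numpy as np

def tile_and_crop(latent, generator, tile=2):
    """Produce a tileable crop from a latent tensor (hypothetical sketch)."""
    # Tile the latent spatially (tile x tile copies); a generator trained
    # with adversarial expansion produces continuity at the seam intersections.
    tiled = np.tile(latent, (tile, tile, 1))        # (tile*H, tile*W, C)
    output = generator(tiled)                       # hypothetical generator call
    # Crop the central area: it contains one full period of the texture,
    # with the seams inside it, so the crop tiles seamlessly.
    H, W = output.shape[:2]
    h, w = H // tile, W // tile
    top, left = (H - h) // 2, (W - w) // 2
    return output[top:top + h, left:left + w]

def sample_best(latents, generator, discriminator):
    """Keep the candidate the discriminator scores as most realistic."""
    # The discriminator acts as a perceptual error metric over candidates,
    # filtering out latents that produce artifacts.
    crops = [tile_and_crop(z, generator) for z in latents]
    scores = [discriminator(c) for c in crops]
    return crops[int(np.argmax(scores))]
```

In practice the sampling loop draws many latent candidates and keeps only crops whose discriminator score exceeds a quality threshold; the argmax above is the simplest variant of that selection.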





We present SeamlessGAN, a fully self-supervised generative model capable of generating multiple tileable texture stacks from a single input example. To achieve this, we follow a two-step process. First, building on recent work on generative models for texture synthesis, we train a GAN using an adversarial expansion scheme, which learns to generate textures that double the spatial extent of their inputs. Our generator can output multiple spatially-coherent texture maps at the same time. The training objective is a combination of pixel-wise, adversarial, and perceptual losses.
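The combined training objective can be illustrated with a short sketch. The loss weights and the non-saturating adversarial form below are illustrative assumptions, not the paper's exact formulation, and the feature lists stand in for activations of a pretrained network used for the perceptual term.

```python
import numpy as np

def combined_loss(pred, target, d_score, feats_pred, feats_gt,
                  w_pix=1.0, w_adv=0.1, w_perc=1.0):
    """Pixel-wise + adversarial + perceptual objective (illustrative weights)."""
    # Pixel-wise term: L1 distance between synthesized and target maps.
    l_pix = np.abs(pred - target).mean()
    # Adversarial term (non-saturating): push the discriminator score,
    # assumed in (0, 1], toward "real".
    l_adv = -np.log(d_score + 1e-8)
    # Perceptual term: L1 distance between feature activations of a
    # pretrained network (stand-in lists here).
    l_perc = np.mean([np.abs(a - b).mean()
                      for a, b in zip(feats_pred, feats_gt)])
    return w_pix * l_pix + w_adv * l_adv + w_perc * l_perc
```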



@article{rodriguezpardo2022SeamlessGAN,
  author  = {Rodriguez-Pardo, Carlos and Garces, Elena},
  title   = {SeamlessGAN: Self-Supervised Synthesis of Tileable Texture Maps},
  journal = {IEEE Transactions on Visualization and Computer Graphics},
  year    = {2022}
}


Elena Garces was partially supported by a Torres Quevedo Fellowship (PTQ2018-009868).