Outpaint your way out

Prompt: “tileable image of garden lattice covered in climbing roses in the style of 1930s print ephemera,” generated using the outpainting feature in DALL·E 2

“I think what artists do, and what people who make culture do, is somehow produce simulators where new ideas like this can be explored. If you start to accept the idea of generative music, if you take home one of my not-available-in-the-foyer packs and play it at home, and you know that this is how this thing is made, you start to change your concept about how things can be organized. What you've done is moved into a new kind of metaphor. How things are made, and how they evolve. How they look after themselves.” – Brian Eno in “Generative Music,” a talk delivered in San Francisco in June 1996

With generative models and tools for imagination on my mind lately, I’ve been on the lookout for new metaphors. Outpainting—the aspect of generative image models that enables extrapolating from an initial image and extending the canvas outward—is my favorite metaphor yet.

As a feature, outpainting is useful in its own right. At a practical level, if you have an image that’s correct in content but wrong in aspect ratio, outpainting can solve that by “uncropping” the image and hallucinating what could plausibly have been just beyond the frame. At an imaginative level, outpainting for images is remarkably powerful for world-building. I use generative models often with my two young kids, and when my almost-four-year-old son asked for “a mean-looking wizard with red eyes in a haunted castle with green windows and slime,” it was thrilling to be able to use the outpainting feature in DALL·E 2 to extend the canvas again, and again, and again—revealing more mean-looking wizards as we went. (He laughed with nervous glee as we kept generating more panels: “WHY do those guys keep popping UP!” The fact that they seemed to “haunt” the infinite canvas only added to the effect.)
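For tool-builders who want to play with the “extend the canvas” idea programmatically, here is a minimal sketch using OpenAI’s image-edit endpoint, which fills in the transparent regions of a supplied PNG. The DALL·E 2 web UI’s outpainting panels work differently under the hood, so treat this as an approximation; the filenames, canvas sizes, and half-tile stitching strategy are illustrative assumptions.

```python
# Rough sketch of outpainting via the OpenAI images.edit endpoint, which
# paints into the transparent regions of the supplied PNG. Filenames and
# sizes below are placeholders, not the DALL·E 2 UI's actual mechanics.
from PIL import Image
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Start from a square source image (the edit endpoint expects a square PNG).
original = Image.open("wizard_castle.png").convert("RGBA")  # e.g. 1024x1024

# Build the next panel: reuse the right half of the original on the left side
# of a fresh transparent canvas, leaving the right half empty to be painted.
panel = Image.new("RGBA", original.size, (0, 0, 0, 0))
half = original.width // 2
panel.paste(original.crop((half, 0, original.width, original.height)), (0, 0))
panel.save("panel_to_outpaint.png")

# Ask the model to hallucinate the transparent region, guided by the prompt.
result = client.images.edit(
    model="dall-e-2",
    image=open("panel_to_outpaint.png", "rb"),
    prompt="a mean-looking wizard with red eyes in a haunted castle "
           "with green windows and slime",
    n=1,
    size="1024x1024",
)
print(result.data[0].url)  # stitch this panel onto the original's right edge
```

Repeating the loop (generate a panel, stitch it on, slide the window right) is one way to approximate the ever-extending canvas described above.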

Yet as useful as outpainting is practically, I believe it’s just as useful as a metaphor for one dimension of what generative models make possible. In Brian Eno’s words, metaphors “change [our] concept about how things can be organized.” As a community of tool-builders, we would do well to build a library of prompts for our own brains, to nudge ourselves to notice when generative models could make a difference in what we’re building. Something like the Oblique Strategies card deck first created by Brian Eno and Peter Schmidt in 1975, but designed specifically to acclimate our minds to the potential of generative models. The Oblique Strategies deck includes prompts such as “Use an old idea” and “Honour thy error as a hidden intention.” In this new, generative adaptation of the deck, we could imagine a prompt like “Outpaint your way out.”

As a metaphor, what outpainting captures is that often in creative work (which to my mind encompasses most knowledge work), what we want is to figure out the kernel—the gist of something, the direction—and then make more of it. How much more? Sometimes we don’t know until we hit a limit where things feel “done”; sometimes we’re working to a spec (the dimensions of an image, the length of a paper, the number of rows in a spreadsheet); sometimes we simply run up against a deadline or run out of energy, and have to make do with however far we got. Outpainting—extrapolating from a kernel in a specified direction—can help in all of those scenarios. In particular, outpainting can help by adding a tireless level of detail and surprise, sometimes introducing gems that perk up the process just when the human creator’s imagination was running out of steam. In the image featured at the top of this post, the single rose on the right took my breath away. It was one option for the final panel, and I was just paging through, sleepwalking my way to trying to finish the image—when all of a sudden, beauty.

Outpaint your way out. Where do users get stuck in your tool, and where might generative models help them get unstuck? What is a “complete work” in your product, and can generative models help your users envision that end state faster? Where do your users need to create output in large quantity, and can generative models support them by synthesizing abundance from a directionally-correct kernel? What does it bring up for you?

With thanks to Zack for the lively discussion of outpainting as a metaphor, and to Neil for helping me crystallize the thought by introducing a related concept—generative models as a means of creating more interactions “kind of like dragging out a formula in Excel.” Also, thanks to the authors of this 2018 research paper—featuring one of the early instances of the term “outpainting”—for the delightful verb “hallucinate,” which I’ve borrowed here.