Noise to AI
The Technical Pipeline
Abstract noise serves as the raw material for AI image generation, bridging digital randomness and physical fabrication. This project builds a recursive workflow that moves between Grasshopper, ComfyUI, and a robotic arm. By treating noise not as a mere visual artifact but as structured input data, we can guide diffusion models to translate generated patterns into specific, controllable forms.
The initial step is creating structured noise in Grasshopper. Using custom plugin logic, we construct complex, parametric black-and-white 3D noise maps that serve as the control layer for the AI model. These maps define the structural boundaries within which the diffusion process operates.
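The kind of smooth, parametric noise map described above can be sketched outside Grasshopper as well. The following is a minimal value-noise generator: a coarse random lattice is upsampled with smoothstep-weighted bilinear interpolation to produce a grayscale control map. The function and parameter names (`grid`, `seed`) are illustrative assumptions, not the project's actual plugin settings.

```python
import numpy as np

def value_noise(width, height, grid=8, seed=0):
    """Generate a grayscale value-noise map with entries in [0, 1).

    A coarse random lattice is upsampled with smoothstep-weighted
    bilinear interpolation, mimicking a parametric black-and-white
    noise map. `grid` controls the feature scale: small values give
    broad gradient flows, large values give fine-grained noise.
    """
    rng = np.random.default_rng(seed)
    lattice = rng.random((grid + 1, grid + 1))  # coarse random values

    ys = np.linspace(0, grid, height, endpoint=False)
    xs = np.linspace(0, grid, width, endpoint=False)
    yi, xi = np.floor(ys).astype(int), np.floor(xs).astype(int)
    yf, xf = ys - yi, xs - xi
    # smoothstep fade for smooth blending across lattice cells
    fy = yf * yf * (3 - 2 * yf)
    fx = xf * xf * (3 - 2 * xf)

    top = lattice[yi][:, xi] * (1 - fx) + lattice[yi][:, xi + 1] * fx
    bot = lattice[yi + 1][:, xi] * (1 - fx) + lattice[yi + 1][:, xi + 1] * fx
    return top * (1 - fy[:, None]) + bot * fy[:, None]

# a 256x256 map ready to be saved as a black-and-white control image
noise = value_noise(256, 256, grid=8, seed=42)
```

The same array could then be written out as an 8-bit grayscale image and used as the control layer for the diffusion stage.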
Refining Noise
A significant portion of the process focuses on refining the scale and density of the noise maps. Experimenting with Grasshopper plugins produced a wide variety of black-and-white images. Comparing small-scale, fine-grained noise against larger-scale, smooth gradient flows revealed drastically different interpretations by the diffusion model downstream.
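The fine-versus-coarse distinction can be made concrete with a simple metric. Below, a per-pixel random map stands in for fine noise and a block-upsampled lattice stands in for a coarse gradient map; the grid sizes and the neighbour-contrast metric are assumptions for this sketch, not the project's actual Grasshopper parameters.

```python
import numpy as np

rng = np.random.default_rng(7)
size, coarse_grid = 128, 4  # illustrative scales

fine = rng.random((size, size))  # per-pixel noise: high-frequency detail
# coarse map: a 4x4 random lattice upsampled into large uniform blocks
coarse = np.kron(rng.random((coarse_grid, coarse_grid)),
                 np.ones((size // coarse_grid, size // coarse_grid)))

def mean_local_contrast(img):
    """Average absolute difference between horizontal neighbours: a
    crude proxy for how fine-grained a map reads to a diffusion model."""
    return float(np.abs(np.diff(img, axis=1)).mean())
```

Running the metric on both maps shows the fine map carrying far more local contrast, which is one plausible reason the diffusion model interprets the two scales so differently.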
The Output Matrix
Once generated, the noise maps are fed into local diffusion models via ComfyUI. By manipulating the latent space, specifically varying denoise values and text prompts such as "Lush Plants, Fine Line Black and Grey, Film Photo", the raw noise is translated into distinct, structured images. The outputs vary widely with the seed and denoise parameters, demonstrating how the noise dictates the compositional skeleton of the generated image.
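A seed and denoise sweep like the one described above can be scripted against ComfyUI's local HTTP API, which accepts a workflow graph as JSON. The sketch below only builds the request payloads; the node id ("3"), the sampler settings, and the sweep values are illustrative, and the actual graph would be whatever workflow was exported from the ComfyUI editor.

```python
import itertools
import json

# Illustrative sweep: vary seed and denoise on a KSampler node.
seeds = [1, 2]
denoise_values = [0.55, 0.75, 0.95]

payloads = []
for seed, denoise in itertools.product(seeds, denoise_values):
    # minimal workflow fragment; node id "3" is a placeholder
    graph = {
        "3": {
            "class_type": "KSampler",
            "inputs": {
                "seed": seed,
                "denoise": denoise,
                "steps": 20,
                "cfg": 7.0,
                "sampler_name": "euler",
                "scheduler": "normal",
            },
        },
    }
    payloads.append(json.dumps({"prompt": graph}))

# each payload would then be POSTed to the local ComfyUI instance, e.g.
# requests.post("http://127.0.0.1:8188/prompt", data=payload)
```

Sweeping denoise low to high shifts the output from "mostly the noise map" toward "mostly the text prompt", which is exactly the lever used to control how strongly the noise dictates the composition.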
Physical Output
The final phase closes the loop between the digital and the physical. The generated AI images are translated back into machine-readable toolpaths. A robotic arm equipped with a specialized spraying tool reproduces these images on paper rolls, dynamically varying the density and spray duration of ink dots to recreate the noise-driven AI imagery in the physical world.
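One simple way to map image darkness to dot density and spray duration is probabilistic sampling: darker pixels receive dots more often and with longer pulses. The sketch below illustrates that idea on a synthetic darkness ramp; the `(x, y, spray_ms)` command format and the timing range are hypothetical stand-ins for the project's real robot toolpath format.

```python
import numpy as np

def image_to_spray_dots(img, max_ms=40, rng=None):
    """Map a grayscale image (0 = white, 1 = full coverage) to a list
    of (x, y, spray_ms) commands. Darker pixels get dots more often
    (density proportional to darkness) and longer spray pulses. The
    command format, travel ordering, and nozzle timing here are
    illustrative, not a real robot-arm protocol.
    """
    rng = rng or np.random.default_rng(0)
    h, w = img.shape
    dots = []
    for y in range(h):
        for x in range(w):
            if rng.random() < img[y, x]:  # dot probability = darkness
                dots.append((x, y, int(img[y, x] * max_ms)))
    return dots

# synthetic left-to-right darkness ramp as a stand-in for an AI output
ramp = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
dots = image_to_spray_dots(ramp)
```

The resulting dot list would still need to be ordered into an efficient travel path before being sent to the arm, but it captures the core mapping from image tone to physical ink density.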