Yara: Every painting I make starts the same way — with a mess. A splash of paint, a smear of charcoal, a random gesture. The art is not in the first mark. The art is in what I do with it afterward. Generative models work the same way.
MUSE: From chaos, structure. From noise, signal. From nothing, everything.
If you have ever used Perlin noise to generate terrain in a canvas game, or simplex noise for procedural textures, you already understand the core idea. Noise is not the enemy of creation — it is the raw material.
A generative model needs a source of variation. If you gave it the same input every time, it would produce the same output every time. Noise provides infinite variation — each random vector is a unique seed for a unique creation.
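To make the determinism point concrete, here is a minimal sketch in plain TypeScript. The `mulberry32` seeded PRNG is an illustrative choice of ours, not part of the lesson's stack: a "generator" that depends only on a fixed seed produces the identical output every time, so genuine variation has to come from fresh noise.

```typescript
// Tiny seeded PRNG (mulberry32): the same seed yields the same stream
function mulberry32(seed: number): () => number {
  let a = seed >>> 0;
  return () => {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// A deterministic "noise source": seed in, vector out
function sampleNoise(seed: number, dim: number): number[] {
  const rand = mulberry32(seed);
  return Array.from({ length: dim }, () => rand());
}

const a = sampleNoise(42, 5);
const b = sampleNoise(42, 5); // same seed: identical vector
const c = sampleNoise(7, 5);  // different seed: different vector
```

Feed the model the same seed and you get the same creation; only a fresh random vector gives you a fresh one.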
import * as tf from '@tensorflow/tfjs';

// The same idea in two worlds:
const val = noise.perlin2(x * 0.01, y * 0.01); // creative coding: structured noise
const z = tf.randomNormal([1, 100]);           // ML: a random noise vector
// Frontend: Perlin noise → terrain heightmap
// Each (x, y) coordinate gets a smooth random value
function generateTerrain(width: number, height: number): HTMLCanvasElement {
  const canvas = document.createElement('canvas');
  canvas.width = width;
  canvas.height = height;
  const ctx = canvas.getContext('2d')!;
  const img = ctx.createImageData(width, height);
  for (let i = 0; i < width * height; i++) {
    // noise.perlin2(x * scale, y * scale) → smooth height values in [-1, 1]
    const h = (noise.perlin2((i % width) * 0.01, Math.floor(i / width) * 0.01) + 1) / 2;
    img.data.set([h * 255, h * 255, h * 255, 255], i * 4);
  }
  ctx.putImageData(img, 0, 0);
  return canvas; // Random input → structured output
}
// ML: Random vector → generated image
// A noise vector is like coordinates in "idea space"
const noiseVector = tf.randomNormal([1, 100]);
// Each of the 100 dimensions encodes some aspect of the output
// Different noise → different output
const noise1 = tf.randomNormal([1, 100]);
const noise2 = tf.randomNormal([1, 100]);
// noise1 and noise2 will produce different generated images

Raw noise is formless. A neural network layer transforms it into something with structure — just as a vertex shader transforms raw vertex data into a recognizable shape.
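Thinking of noise vectors as coordinates in "idea space" suggests a classic trick: walking between two points. A minimal sketch in plain TypeScript — the specific vectors are made up, and the trained generator that would turn each point into an image is assumed, not shown:

```typescript
// Each noise vector is a point in idea space; linear interpolation
// moves smoothly from one point to another.
function lerp(a: number[], b: number[], t: number): number[] {
  return a.map((v, i) => v + t * (b[i] - v));
}

const noise1 = [0.5, -1.2, 0.3];
const noise2 = [-0.8, 0.4, 1.1];

// t = 0 gives noise1, t = 1 gives noise2, t = 0.5 a point in between
const midpoint = lerp(noise1, noise2, 0.5);
```

Fed through a trained generator, a sequence of interpolated vectors produces a smooth morph between two generated images — a hint that the space itself has structure.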
import * as tf from '@tensorflow/tfjs';
// A single dense layer transforms noise into a pattern
const noiseInput = tf.randomNormal([1, 64]);
// Dense layer: 64 random values → 784 structured values (28x28 grid)
const denseLayer = tf.layers.dense({ units: 784, activation: 'sigmoid' });
const output = denseLayer.apply(noiseInput) as tf.Tensor;
const reshaped = output.reshape([28, 28]) as tf.Tensor2D;
// Render to canvas
async function renderToCanvas(tensor: tf.Tensor2D, canvas: HTMLCanvasElement) {
const data = await tensor.array();
const ctx = canvas.getContext('2d')!;
const imageData = ctx.createImageData(28, 28);
for (let i = 0; i < 784; i++) {
const v = Math.floor(data[Math.floor(i / 28)][i % 28] * 255);
imageData.data[i * 4] = v;
imageData.data[i * 4 + 1] = v;
imageData.data[i * 4 + 2] = v;
imageData.data[i * 4 + 3] = 255;
}
ctx.putImageData(imageData, 0, 0);
}
// Without training: random-looking pattern
// After training: the layer learns to shape noise into meaningful images

The untrained layer produces visual static. Training teaches it which shapes, edges, and textures are meaningful — just as an artist learns which marks to keep and which to discard.
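To see mechanically what "training shapes noise" means, here is a toy sketch in plain TypeScript rather than tfjs: a single dense layer with a sigmoid, nudged by gradient descent so that one fixed noise vector maps closer to a target pattern. The layer size, learning rate, and target are illustrative assumptions, not the lesson's actual training setup.

```typescript
// Toy dense layer: y = sigmoid(Wx + b), trained with MSE loss.
const sigmoid = (z: number) => 1 / (1 + Math.exp(-z));

function forward(W: number[][], b: number[], x: number[]): number[] {
  return W.map((row, j) =>
    sigmoid(row.reduce((s, w, i) => s + w * x[i], b[j]))
  );
}

const mse = (y: number[], t: number[]) =>
  y.reduce((s, v, i) => s + (v - t[i]) ** 2, 0) / y.length;

// One gradient-descent step on W and b toward target t
function step(W: number[][], b: number[], x: number[], t: number[], lr: number) {
  const y = forward(W, b, x);
  y.forEach((yj, j) => {
    // dL/dz_j = (2/n)(y_j - t_j) * y_j(1 - y_j)
    const g = (2 / y.length) * (yj - t[j]) * yj * (1 - yj);
    b[j] -= lr * g;
    x.forEach((xi, i) => { W[j][i] -= lr * g * xi; });
  });
}

// A fixed "noise" input and a target pattern to shape it into
const x = [0.3, -1.2, 0.8, 0.05];
const t = [1, 0, 1, 0];
const W = x.map(() => x.map(() => 0.1));
const b = [0, 0, 0, 0];

const before = mse(forward(W, b, x), t);
for (let i = 0; i < 200; i++) step(W, b, x, t, 0.5);
const after = mse(forward(W, b, x), t);
// after < before: the layer has learned to shape this noise toward the target
```

Untrained, the weights turn the noise into an arbitrary pattern; after a few hundred steps the same noise lands much closer to the target. Real generators do this at scale, over many noise vectors and many target images.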
Create a function that takes a noise vector dimension and produces a 28x28 pattern by passing noise through a dense layer. Return the reshaped 2D array.
import * as tf from '@tensorflow/tfjs';

async function noiseToPattern(noiseDim: number): Promise<number[][]> {
  // 1. Create a random noise vector of shape [1, noiseDim]
  const noise = null; // your code here
  // 2. Create a dense layer with 784 units and sigmoid activation
  const layer = null; // your code here
  // 3. Pass noise through the layer and reshape to [28, 28]
  const output = null; // your code here
  return output;
}
Samir builds a canvas sketch driven by noise vectors and sees the parallel — noise is the clay, the model is the sculptor.
Next: putting it all together to build your first generator