If you’re reading this on ellyseum.me, there’s a full-screen cosmic nebula behind these words. Procedural noise shaders, twinkling stars with diffraction spikes, the whole thing subtly responding to your mouse. All running at 60fps (or 240fps if your monitor’s into that). And there’s not a single line of Three.js in the entire project.

I get asked about this a lot, usually some variation of “what library did you use?” The answer is: none. It’s raw WebGL. A full-screen quad, a vertex shader, a fragment shader, and some uniforms. That’s it. That’s the whole trick!

This post is about how I built it, why I skipped the frameworks, and what I learned along the way. Spoiler: it was more fun than it had any right to be.


Why Not Three.js?

Three.js is great. I’ve used it professionally. But for a background effect, it’s absurd. You’re loading a scene graph, a renderer, cameras, geometry classes, material systems, and a hundred other abstractions you will never touch, just to draw a single rectangle and shade it. It’s like renting a U-Haul to deliver a sandwich.

What I actually needed:

  1. One full-screen quad (two triangles)
  2. Three uniforms: time, mouse position, resolution
  3. A fragment shader that does all the visual work

Three.js would’ve added ~150KB of JavaScript to do what I accomplished in about 120 lines of TypeScript. More importantly, I wanted to understand what was happening at every level. When your shader’s running slow, “the Three.js renderer is doing something” isn’t a useful diagnosis. “My FBM loop is doing 5 octaves at full resolution” is. One of these you can fix. The other one you can google frantically.


The Architecture

The whole system is simpler than people expect. Honestly, I was surprised too. Here’s the skeleton.

Canvas setup: A <canvas> element sits fixed behind everything with z-index: 0. The page content floats on top. Nothing fancy.

WebGL context: I request it with every optimization flag I can think of:

const contextOptions: WebGLContextAttributes = {
  alpha: false,        // No transparency needed, skip compositing
  antialias: false,    // Shader output is blurry nebula, AA is wasted work
  depth: false,        // 2D effect, no depth buffer
  stencil: false,      // Not doing stencil operations
  powerPreference: 'high-performance',
  preserveDrawingBuffer: false,
};

Every one of these flags matters! alpha: false alone can save you a compositing pass on some GPUs. preserveDrawingBuffer: false lets the browser discard the backbuffer between frames, which is free performance for something that redraws every frame anyway. I will take every free frame I can get, thank you.
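
For completeness, handing those options over is a single call. A sketch — the fallback wiring here is my shorthand, not a line-for-line copy of the site's code:

const canvas = document.querySelector('canvas') as HTMLCanvasElement;
const gl = canvas.getContext('webgl', contextOptions);
if (!gl) {
  // Assumption: reuse the reduced-motion CSS fallback described later
  document.body.classList.add('reduced-motion');
}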

The quad: Four vertices, two triangles via TRIANGLE_STRIP. This is the entire geometry of the scene:

const vertices = new Float32Array([-1, -1, 1, -1, -1, 1, 1, 1]);

That’s clip space coordinates. Bottom-left, bottom-right, top-left, top-right. The GPU stretches this across the entire viewport, and then the fragment shader takes over for every single pixel. Eight numbers. That’s my 3D model. 😄

Shader compilation: Compile vertex shader, compile fragment shader, link program, check for errors. It’s boilerplate, but it’s honest boilerplate. You know exactly what failed and where. No mystery meat.
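
If you've never written it, the whole thing is shaped more or less like this (a sketch, error handling trimmed to the essentials):

function compile(gl: WebGLRenderingContext, type: number, src: string): WebGLShader {
  const shader = gl.createShader(type)!;
  gl.shaderSource(shader, src);
  gl.compileShader(shader);
  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
    // The info log names the exact line and error
    throw new Error(gl.getShaderInfoLog(shader) ?? 'shader compile failed');
  }
  return shader;
}

const program = gl.createProgram()!;
gl.attachShader(program, compile(gl, gl.VERTEX_SHADER, vertexSrc));
gl.attachShader(program, compile(gl, gl.FRAGMENT_SHADER, fragmentSrc));
gl.linkProgram(program);
if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
  throw new Error(gl.getProgramInfoLog(program) ?? 'program link failed');
}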


The Cosmic Shader

The fragment shader is where the actual magic happens. The vertex shader is six lines that pass through UV coordinates:

attribute vec2 position;
varying vec2 vUv;
void main() {
  vUv = position * 0.5 + 0.5;
  gl_Position = vec4(position, 0.0, 1.0);
}

The * 0.5 + 0.5 converts from clip space (-1 to 1) to UV space (0 to 1). That’s the kind of thing you learn once and never think about again.

The fragment shader is where it gets interesting. The nebula effect comes from layered fractional Brownian motion (FBM) using 3D simplex noise. If you’ve never encountered FBM before: you take a noise function, sample it at increasing frequencies with decreasing amplitudes, and add the results together. Each layer adds finer detail. It’s how nature does it, and honestly nature has pretty good taste.

float fbm(vec3 p) {
  float v = 0.0, a = 0.5;
  for (int i = 0; i < 5; i++) {
    v += a * snoise(p);
    p *= 2.0;    // Double the frequency
    a *= 0.5;    // Halve the amplitude
  }
  return v;
}

Five octaves. Each pass doubles the frequency and halves the contribution. The result looks organic because that’s exactly how natural phenomena like clouds and smoke work at different scales. You get large sweeping shapes from the first octave, and the fine wispy detail from the later ones. It’s fractal all the way down, baby.

The nebula itself is three FBM layers with different colors, scales, and speeds:

// Purple nebula - large scale, slow drift
float n1 = fbm(vec3((uv + mouseOffset * 0.5) * 1.5, t)) * 0.5 + 0.5;
col += vec3(0.4, 0.08, 0.6) * pow(n1, 2.2) * 0.45;

// Teal nebula - medium scale, different phase
float n2 = fbm(vec3((uv + mouseOffset) * 2.0 + 5.0, t * 0.7)) * 0.5 + 0.5;
col += vec3(0.02, 0.28, 0.45) * pow(n2, 2.5) * 0.35;

// Pink wisps - fine detail, slowest animation
float n3 = fbm(vec3((uv + mouseOffset * 1.5) * 3.0 + 10.0, t * 0.5));
col += vec3(0.65, 0.12, 0.45) * smoothstep(0.1, 0.7, n3) * 0.2;

Notice how each layer samples at a different mouseOffset multiplier. The purple layer barely moves with the mouse, the teal moves more, the pink wisps move the most. This creates parallax. Your brain reads it as depth, even though it’s a flat 2D shader. The + 5.0 and + 10.0 offsets make sure each layer is sampling a completely different region of noise space so they don’t look like copies of each other.

The pow(n1, 2.2) is doing gamma-like contrast enhancement. It crushes the dark values and makes the bright areas pop. Without it, the nebula looks like a flat, washed-out fog. I did not spend hours on a cosmic background to get “fog.”


Stars That Actually Twinkle

I wanted stars that felt alive, not a static dot field. Each star gets its own twinkling rhythm based on a hash of its grid position:

float stars(vec2 uv, float density, float brightness) {
  vec2 gv = fract(uv * density) - 0.5;  // position within this grid cell
  vec2 id = floor(uv * density);        // which cell we're in
  float h = hash(id);
  if (h > 0.95) {                       // only ~5% of cells get a star
    float d = length(gv);
    // Two extra hashes per star so the three waves drift independently
    float h2 = hash(id + 17.0);
    float h3 = hash(id + 31.0);
    // Each star gets unique speed and phase
    float t1 = sin(uTime * (0.3 + h * 2.7) + h * 62.83);
    float t2 = sin(uTime * (0.5 + h2 * 2.5) + h2 * 47.12);
    float t3 = sin(uTime * (0.8 + h3 * 3.2) + h3 * 31.42);
    float flicker = 0.15 + 0.85 * ((t1*0.5 + t2*0.3 + t3*0.2) * 0.5 + 0.5);
    return smoothstep(0.06, 0.0, d) * brightness * flicker;
  }
  return 0.0;
}

The trick is three overlapping sine waves with unique frequencies and phases derived from the star's hashes. If you use a single sine wave, all the stars pulse rhythmically and it looks like a rave. Which, tempting, but no. Three waves with phase offsets near multiples of π (62.83 ≈ 20π, 47.12 ≈ 15π, 31.42 ≈ 10π), each scaled by a different per-star hash, create interference patterns that look genuinely random. Some stars are bright while others are dim, and the pattern never obviously repeats.

The brighter stars get diffraction spikes, like you’d see through a telescope:

// 4-point diffraction spikes
float spike1 = smoothstep(0.004, 0.0, abs(gv.x)) * smoothstep(0.08, 0.0, abs(gv.y));
float spike2 = smoothstep(0.004, 0.0, abs(gv.y)) * smoothstep(0.08, 0.0, abs(gv.x));

This is just two perpendicular lines with very narrow width (0.004) and short length (0.08), blended with smoothstep for soft falloff. Add diagonal spikes at 45 degrees by rotating the grid coordinates, and you get that classic four-point sparkle. It's maybe ten lines of GLSL that make the whole thing feel cinematic. ✨
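
The diagonal pair is the same two lines on rotated coordinates. Something like:

// Rotate the cell coordinates 45 degrees (0.7071 ~= cos(pi/4) = sin(pi/4))
vec2 dgv = mat2(0.7071, 0.7071, -0.7071, 0.7071) * gv;
float spike3 = smoothstep(0.004, 0.0, abs(dgv.x)) * smoothstep(0.08, 0.0, abs(dgv.y));
float spike4 = smoothstep(0.004, 0.0, abs(dgv.y)) * smoothstep(0.08, 0.0, abs(dgv.x));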

Four density layers of stars (35, 70, 140, 220) at different parallax rates create the illusion of depth. Stars at mouseOffset * 0.3 barely move; stars at mouseOffset * 1.5 shift noticeably. Your lizard brain does the rest.
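
Composited, it's just four calls into that stars() function. The in-between parallax rates and the brightness weights here are illustrative, not the exact values:

float s = 0.0;
s += stars(uv + mouseOffset * 0.3, 35.0, 1.0);   // far layer, barely moves
s += stars(uv + mouseOffset * 0.7, 70.0, 0.8);
s += stars(uv + mouseOffset * 1.1, 140.0, 0.6);
s += stars(uv + mouseOffset * 1.5, 220.0, 0.4);  // near layer, shifts the most
col += vec3(s);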


Mouse Interaction

The mouse position feeds into the shader as a uniform, normalized to 0-1:

updateMouse(x: number, y: number): void {
  this.targetMouse.x = x;
  this.targetMouse.y = y;
}

But it doesn’t snap directly. In the render loop, the actual mouse position lerps toward the target:

this.mouse.x += (this.targetMouse.x - this.mouse.x) * 0.05;
this.mouse.y += (this.targetMouse.y - this.mouse.y) * 0.05;

That 0.05 factor means the effect trails behind your cursor with a dreamy, floaty quality. It also means sudden mouse movements don’t cause jarring jumps in the nebula. The shader receives (uMouse - 0.5) * 0.04 as an offset, so even at maximum displacement the nebula only shifts by 2% of the screen. Subtle enough to feel alive, not so much that it’s distracting while you’re trying to read. I have priorities. Sometimes.
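
One caveat: a fixed per-frame factor converges roughly four times faster at 240fps than at 60fps. The version above is the simple one; a frame-rate-independent variant looks like this (a sketch, assuming dt is the frame delta in seconds):

// ~0.05 per frame at 60fps, same feel at any refresh rate;
// 3.08 ~= -60 * ln(1 - 0.05)
const k = 1 - Math.exp(-3.08 * dt);
this.mouse.x += (this.targetMouse.x - this.mouse.x) * k;
this.mouse.y += (this.targetMouse.y - this.mouse.y) * k;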

On mobile, touch events and device orientation both feed into the same system. Tilt your phone and the nebula responds! iOS 13+ requires explicit permission for gyroscope access, which I handle on first touch. Thanks, Apple.
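
The permission dance looks roughly like this (a sketch; experience.updateMouse and the tilt-to-mouse mapping are illustrative):

// iOS 13+ only exposes the gyroscope after an explicit permission grant,
// and the request must come from a user gesture -- hence first touch
window.addEventListener('touchend', async () => {
  const DOE = DeviceOrientationEvent as unknown as {
    requestPermission?: () => Promise<'granted' | 'denied'>;
  };
  if (!DOE.requestPermission || (await DOE.requestPermission()) === 'granted') {
    window.addEventListener('deviceorientation', (e) => {
      // Hypothetical mapping of tilt onto the same 0-1 mouse space
      experience.updateMouse(0.5 + (e.gamma ?? 0) / 90, 0.5 + (e.beta ?? 0) / 90);
    });
  }
}, { once: true });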


Progressive Enhancement

Not every device can run five octaves of 3D simplex noise at 60fps. Some of us are browsing on a 2018 iPad, and I am not judging because that was me last month. So the site degrades gracefully across three tiers.

Full WebGL: Everything you see on a decent desktop. Cosmic background, constellation text effects using GPU instancing, flying icon particles, the target reticle that follows your mouse.

Potato mode: The FPS monitor (that little counter in the corner) watches your frame rate after a 3-second warmup. If you drop below 27fps for two consecutive samples, it kicks in. Resolution drops to 50% (the nebula is blurry by nature, so you genuinely can’t tell), constellation text and flying icons get destroyed entirely, and you’re left with just the background shader and basic navigation. If your FPS recovers, everything comes back. No shame in potato mode!! Your GPU is doing its best.

private enablePotatoMode(): void {
  this.constellation?.destroy();
  this.flyingIcons?.destroy();
  document.body.classList.add('potato-mode');
  this.background?.setResolutionScale(0.5);
}
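
The watchdog that flips it on is small. A sketch of the logic described above — the field names are illustrative, not the site's actual ones:

private checkFps(now: number): void {
  this.frames++;
  if (now - this.windowStart < 500) return;  // sample in 500ms windows
  const fps = (this.frames * 1000) / (now - this.windowStart);
  this.frames = 0;
  this.windowStart = now;
  if (now - this.startTime < 3000) return;   // still in the 3s warmup
  this.lowSamples = fps < 27 ? this.lowSamples + 1 : 0;
  if (this.lowSamples >= 2) this.enablePotatoMode();
}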

Static CSS fallback: If prefers-reduced-motion: reduce is set, or WebGL isn’t available at all, the canvas gets display: none and a CSS gradient takes over:

.reduced-motion body::before {
  background:
    radial-gradient(ellipse 80% 50% at 50% 0%,
      rgba(139, 92, 246, 0.15), transparent 60%),
    radial-gradient(ellipse 60% 40% at 20% 80%,
      rgba(236, 72, 153, 0.1), transparent 50%),
    radial-gradient(ellipse 50% 50% at 80% 60%,
      rgba(6, 182, 212, 0.08), transparent 50%),
    var(--void);
}

Three radial gradients approximating the nebula colors against the dark background. It won’t win any awards, but it respects the user’s preferences and still looks intentional. The prefers-reduced-motion check happens in the entry point before any WebGL code even loads:

if (window.matchMedia('(prefers-reduced-motion: reduce)').matches) {
  document.body.classList.add('reduced-motion');
} else {
  new Experience();
}

No WebGL context created, no shaders compiled, no animation loop started. Zero wasted work. 💯


The Performance Tier System

The adaptive quality system goes deeper than just “potato mode on/off.” The FPS monitor detects your display’s refresh rate during a one-second calibration window and sets a target accordingly:

  • Super Ultra (480Hz): Yes, these monitors exist now. The math is the same.
  • Ultra (240Hz): High-refresh gaming monitors.
  • High (120Hz): Most modern phones, ProMotion iPads, gaming monitors.
  • Medium (60Hz): The default. The everyman. The reliable sedan of refresh rates.
  • Low (30Hz): Budget devices or old hardware.
  • Potato (below 27fps): Something is struggling. Strip it down.

The tier system checks whether you’re sustaining 90% of your target. If your 120Hz display is only hitting 108fps, you stay at High tier. Drop below that for two consecutive 500ms windows, and it adjusts. It’s reactive but not twitchy. Nobody likes twitchy.

Device capability detection supplements the FPS data: mobile user agents get capped at Medium, devices reporting less than 4GB of memory start at Low. Belt and suspenders.
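
The calibration itself is nothing exotic: count requestAnimationFrame ticks for a second and snap to a tier. A sketch, using the thresholds from the list above:

function calibrate(onDone: (hz: number) => void): void {
  let frames = 0;
  let start = 0;
  function tick(now: number): void {
    if (start === 0) start = now;
    frames++;
    if (now - start < 1000) {
      requestAnimationFrame(tick);
      return;
    }
    const measured = (frames * 1000) / (now - start);
    // Snap to the highest tier you're sustaining at 90% or better
    const tiers = [480, 240, 120, 60, 30];
    onDone(tiers.find((hz) => measured > hz * 0.9) ?? 30);
  }
  requestAnimationFrame(tick);
}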


What I Learned

Raw WebGL is verbose, but it’s not complicated. The API is a state machine: you set things up, you draw, you repeat. Once you’ve written the boilerplate once (context creation, shader compilation, buffer setup, uniform binding), the rest is just creative work in GLSL.
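
The loop that drives it all fits in a few lines (a sketch; uTime and uMouse match the shader code above, uResolution is my guess at the third uniform's name):

const uTime = gl.getUniformLocation(program, 'uTime');
const uMouse = gl.getUniformLocation(program, 'uMouse');
const uResolution = gl.getUniformLocation(program, 'uResolution');

const render = (now: number): void => {
  gl.uniform1f(uTime, now * 0.001);                        // ms -> seconds
  gl.uniform2f(uMouse, mouse.x, mouse.y);                  // the lerped mouse
  gl.uniform2f(uResolution, canvas.width, canvas.height);
  gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
  requestAnimationFrame(render);
};
requestAnimationFrame(render);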

The fragment shader is where you’ll spend 90% of your time, and honestly? GLSL is fun. It’s the closest thing to pure math-as-art that programming offers. You type some trig and noise functions and shapes appear. You tweak a constant from 2.2 to 2.5 and the entire mood shifts. There’s an immediacy to it that you don’t get from most programming. It’s addictive. Don’t say I didn’t warn you.

The biggest lesson was about restraint. My first version had bloom, chromatic aberration, and a vignette post-processing pass. It looked like a tech demo. It looked like I was trying to impress someone at a demoscene competition in 2004. Stripping it back to just the nebula, the stars, and the mouse parallax made it feel like a place instead of a performance. The background should make you feel something without demanding your attention.

30,000 stars, five octaves of noise, running behind every page. Not bad for a rectangle and some math. 🚀