Audio-Reactive Typography
Music drives text width, line height, and font size in real time — text breathes with sound.
This demo connects the Web Audio API to Pretext's layout engine. Built-in audio sources (oscillator, noise, chord) generate frequency data that drives the text container width and line height. Bass frequencies control width, treble controls line height, and the text reflows every frame to match.
What this demonstrates
Real-time layout driven by external signals. The audio frequency spectrum is mapped to layout parameters (width, line height), and Pretext re-lays out the text every animation frame. This shows how cheap relayout enables responsive, data-driven typography.
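The band-to-parameter mapping can be sketched as a pure function. The name `mapAudioToLayout` and the default constants are illustrative choices matching the Quick start below, not part of the Pretext API:

```javascript
// Map normalized band energies (0..1) to layout parameters.
// baseWidth, baseLineHeight, and the scale factors are illustrative defaults.
function mapAudioToLayout(bass, treble, { baseWidth = 500, baseLineHeight = 26, sensitivity = 1.0 } = {}) {
  const sensScale = sensitivity * 200;
  return {
    width: baseWidth + bass * sensScale, // bass widens the column
    lineHeight: Math.max(18, baseLineHeight + treble * sensScale * 0.5), // treble opens up the leading
  };
}
```

Keeping the mapping pure makes it easy to test and to retarget at other signals (scroll position, mouse speed) without touching the audio code.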
Relevant Pretext API
- prepare(text, font) — prepare once at startup
- layout(prepared, width, lineHeight) — relayout every frame with audio-driven params
Audio architecture
Uses the Web Audio API with AnalyserNode for frequency data. No external audio
files needed — the demo generates its own sound via oscillators and noise buffers.
Exponential moving average smooths the audio values for fluid transitions.
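The smoothing step is a one-line exponential moving average. Sketched as a standalone helper (the name `ema` is hypothetical; the demo inlines this math):

```javascript
// Exponential moving average: alpha in (0, 1]; higher alpha = faster response,
// lower alpha = smoother but laggier output.
function ema(prev, next, alpha = 0.15) {
  return prev * (1 - alpha) + next * alpha;
}
```

Fed a steady input, repeated calls converge toward it geometrically, which is what turns jittery per-frame frequency readings into fluid width and line-height transitions.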
Quick start
import { prepare, layout, buildFont } from '@chenglou/pretext';
// --- Audio setup (requires user gesture to start) ---
const audioCtx = new AudioContext();
const analyser = audioCtx.createAnalyser();
analyser.fftSize = 256; // 128 frequency bins
const freqData = new Uint8Array(analyser.frequencyBinCount);
const gainNode = audioCtx.createGain();
gainNode.gain.value = 0.3;
gainNode.connect(analyser);
analyser.connect(audioCtx.destination);
// Example source: low sawtooth oscillator (LFO pulsing omitted here)
const osc = audioCtx.createOscillator();
osc.type = 'sawtooth';
osc.frequency.value = 80;
osc.connect(gainNode);
osc.start();
// --- THE core pattern: prepare() once, layout() every frame ---
const text = 'Music drives text width, line height, and font size in real time.';
const prepared = prepare(text, buildFont(16));
// Smoothed values (exponential moving average)
let smoothBass = 0, smoothTreble = 0;
const baseWidth = 500, baseLineHeight = 26;
const sensitivity = 1.0;
function tick() {
  // Read the current frequency spectrum from the analyser
  analyser.getByteFrequencyData(freqData);
  const binCount = freqData.length;
  const third = Math.floor(binCount / 3);

  // Split spectrum into bass / mid / treble bands
  let bass = 0, treble = 0;
  for (let i = 0; i < third; i++) bass += freqData[i];
  for (let i = third * 2; i < binCount; i++) treble += freqData[i];
  bass /= (third * 255); // normalize to 0..1
  treble /= ((binCount - third * 2) * 255);

  // Smooth with EMA to avoid jitter (alpha=0.15)
  const alpha = 0.15;
  smoothBass = smoothBass * (1 - alpha) + bass * alpha;
  smoothTreble = smoothTreble * (1 - alpha) + treble * alpha;

  // Map audio energy to layout params
  const sensScale = sensitivity * 200;
  const width = baseWidth + smoothBass * sensScale; // bass widens the column
  const lh = baseLineHeight + smoothTreble * sensScale * 0.5; // treble raises line height

  // Re-layout every frame — prepare() was called ONCE above
  const result = layout(prepared, width, Math.max(18, lh));
  // result.lineCount and result.height update live

  requestAnimationFrame(tick);
}
// NOTE: AudioContext requires a user gesture (click/tap) to start.
// Call audioCtx.resume() inside a click handler, then kick off the
// animation loop with requestAnimationFrame(tick).
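The noise source mentioned above can be built from a looped AudioBuffer filled with white noise, with playback and the animation loop both gated on a click. A minimal sketch — the `#start` button id and the `fillWhiteNoise` helper are assumptions, and the browser wiring reuses audioCtx, gainNode, and tick() from the Quick start:

```javascript
// Fill a sample array with white noise in [-1, 1].
function fillWhiteNoise(samples) {
  for (let i = 0; i < samples.length; i++) samples[i] = Math.random() * 2 - 1;
  return samples;
}

if (typeof AudioContext !== 'undefined') {
  // Browser-only: a 2-second looping noise buffer feeding the existing gain node.
  const buffer = audioCtx.createBuffer(1, audioCtx.sampleRate * 2, audioCtx.sampleRate);
  fillWhiteNoise(buffer.getChannelData(0));
  const noise = audioCtx.createBufferSource();
  noise.buffer = buffer;
  noise.loop = true;
  noise.connect(gainNode);

  document.getElementById('start').addEventListener('click', async () => {
    await audioCtx.resume(); // user gesture unlocks the AudioContext
    noise.start();
    requestAnimationFrame(tick); // start the relayout loop
  });
}
```

Generating sound in-page keeps the demo dependency-free; swapping in a MediaElementAudioSourceNode would let a real track drive the same layout loop.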