Two Lab-Grown Brains, One Game of Pong, Right in Your Browser
Last week I started playing with a set of files I’d been wanting to touch for a while: real spike recordings from human brain cultures, courtesy of Cortical Labs’ CL1 platform. The result is pinkysbrain.xyz — a browser game where two lab-grown brains compete at Pong, paddle by paddle, decoded from real neural spikes.
Pick two recordings. Tune what each brain “sees.” Hit play. Watch real biology argue over a bouncing dot.
What These Neurons Actually Are
The recordings come from neurons derived from induced pluripotent stem cells (iPSCs): adult cells reprogrammed back to a stem-cell state, then differentiated into cortical neurons — the same cell type that makes up your cerebral cortex. The cells are cultured on a multi-electrode array (MEA): a chip with 64 electrodes arranged in an 8×8 grid, recording the cells’ electrical activity at 25,000 samples per second across all 64 channels.
Two flavors:
- Monolayer cultures — neurons grown as a flat sheet directly on the electrodes. Sparser firing, more spatially organized.
- Organoid cultures — 3D clusters of neurons that self-organize into layered structures resembling cortical tissue. Denser, more chaotic, more interesting to watch.
The platform ships with five recordings, each five minutes long. Together they contain about 240,000 spike events — every single one a real action potential from a real human cell.
Why Pong, of Course
The bridge here is Cortical Labs’ 2022 DishBrain paper. They cultured cortical neurons on a similar MEA, gave them sensory input via electrical stimulation that encoded a Pong ball’s position, rewarded “good” paddle actions with structured stimulation, and punished misses with a chaotic noise burst. The cultures learned. They got better at Pong. It was the first demonstration of goal-directed behaviour from neurons in a dish, embedded in a closed loop, and the paper made global headlines.
I wanted to play with the data myself. Not in a lab setting. In a browser. With a game I could ship on Vercel.
The Decoder
The hard part is mapping spikes to paddles. The MEA gives you time-stamped events: channel 17 fired at t = 2.3401s. You need to turn that into “the paddle should move up by this much.”
The version I shipped is intentionally simple, because the point was to make it playable, not to do state-of-the-art neural decoding:
- Split the 8×8 grid in half. Rows 0–3 are “up” voters, rows 4–7 are “down” voters.
- For each game tick (50ms), count spikes from each half.
- Compute `(downCount − upCount) / total` — a number in `[-1, 1]`.
- Smooth it with a 60/40 exponential moving average to kill jitter.
- Multiply by gain. Clamp to `[-1, 1]`. That’s the paddle velocity.
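The steps above fit in a few lines. This is a minimal sketch, not the shipped code — the names (`SpikeEvent`, `decodeTick`) and the 60%-old/40%-new smoothing split are my own reading of the description:

```typescript
type SpikeEvent = [timestampSec: number, channel: number]; // channel 0–63 on the 8×8 grid

const TICK_SEC = 0.05;  // 50 ms game tick
const EMA_ALPHA = 0.6;  // assumed 60/40 split: 60% previous state, 40% new sample

let smoothed = 0;       // EMA state, persists across ticks

function decodeTick(spikes: SpikeEvent[], tickStart: number, gain: number): number {
  let up = 0, down = 0;
  for (const [t, ch] of spikes) {
    if (t < tickStart || t >= tickStart + TICK_SEC) continue;
    const row = Math.floor(ch / 8);     // rows 0–3 vote "up", rows 4–7 vote "down"
    if (row < 4) up++; else down++;
  }
  const total = up + down;
  const raw = total === 0 ? 0 : (down - up) / total;        // in [-1, 1]
  smoothed = EMA_ALPHA * smoothed + (1 - EMA_ALPHA) * raw;  // kill jitter
  // Apply gain, clamp to [-1, 1]: this is the paddle velocity.
  return Math.max(-1, Math.min(1, smoothed * gain));
}
```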
Then I gave the player three knobs to tune:
- Spatial filter — restrict the decoder to a region of the grid (top, bottom, center, edges, etc.). Different filters are like giving the brain different sensory windows.
- Gain — how much the smoothed signal moves the paddle (0.5× to 3×).
- Time offset — which segment of the 5-minute recording to start replaying from.
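The spatial filter can be thought of as a predicate over grid positions. A sketch, with region definitions that are my guesses rather than the game’s actual boundaries:

```typescript
// Hypothetical spatial filters as predicates on (row, col) of the 8×8 grid.
type SpatialFilter = (row: number, col: number) => boolean;

const FILTERS: Record<string, SpatialFilter> = {
  full:   () => true,
  top:    (row) => row < 4,
  bottom: (row) => row >= 4,
  center: (row, col) => row >= 2 && row <= 5 && col >= 2 && col <= 5,
  edges:  (row, col) => row === 0 || row === 7 || col === 0 || col === 7,
};

// The decoder only counts spikes from channels that pass the active filter.
function channelPasses(ch: number, filter: SpatialFilter): boolean {
  return filter(Math.floor(ch / 8), ch % 8);
}
```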
Tune the knobs right and one brain will wreck the other. Tune them symmetrically and you can watch two equally well-tuned cultures stalemate forever.
The Punishment Model
This was the part I didn’t want to skip. In the DishBrain paper, the cultures’ learning emerges from the punishment signal — when a paddle misses the ball, the system fires a chaotic, unpredictable stimulation pattern at the cells, and the cells’ subsequent activity reorganises. The Free Energy Principle interpretation is that the cultures learn to minimise surprise by playing better and avoiding the chaotic punishment state.
I wanted my decoder to feel like that, even though I’m replaying recordings, not running closed-loop biology. So when a paddle misses the ball:
- The decoder is disrupted for one second
- The exponential smoothing state is overwritten with random noise
- The paddle jitters in a decaying random pattern
- After 1 second, normal decoding resumes
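Those four steps can be sketched like so — assuming the decoder keeps its EMA state in a variable like `smoothed`; the names here are mine, not the project’s:

```typescript
const PUNISH_SEC = 1.0;      // disruption window after a miss
let punishUntil = -Infinity; // game-time when normal decoding resumes
let smoothed = 0;            // the decoder's EMA state

function onMiss(now: number): void {
  punishUntil = now + PUNISH_SEC;
  smoothed = Math.random() * 2 - 1; // overwrite smoothing state with noise
}

function paddleVelocity(now: number, decoded: number): number {
  if (now < punishUntil) {
    // Decaying random jitter: full amplitude right after the miss, fading to zero.
    const decay = (punishUntil - now) / PUNISH_SEC;
    return (Math.random() * 2 - 1) * decay;
  }
  return decoded; // normal decoding resumed
}
```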
The visual effect is exactly what you’d hope: a brain that gets confused after a miss, recovers, and gets back to the game. It’s a metaphor, not a real implementation of the closed-loop physics, but it makes the game feel like it has stakes for the cultures.
How It Runs
The whole thing is client-side. No backend during gameplay.
- Raw HDF5 recordings (~500MB–1GB each, with full voltage traces) are processed offline by a Python script that extracts only the spike events into compact JSON files — `[timestamp, channel]` pairs. The biggest one is 1.5MB. The smallest is 31KB.
- The browser loads two JSON files (one per brain), runs the decoder loop at 20Hz in TypeScript, and renders both brains + Pong via Three.js at 60fps.
- Static site on Vercel. No database. No servers in the hot path.
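The client-side load step is simple given that file shape. A sketch — the `[timestamp, channel]` format is from the post, but the function names and the binning approach are my assumptions:

```typescript
type SpikeEvent = [timestampSec: number, channel: number];

// Fetch one brain's pre-extracted spike file (the JSON is just an array of pairs).
async function loadSpikes(url: string): Promise<SpikeEvent[]> {
  const res = await fetch(url);
  return (await res.json()) as SpikeEvent[];
}

// Pre-bin spikes into 50 ms ticks so each decoder step is an O(1) lookup.
function binByTick(spikes: SpikeEvent[], tickSec = 0.05): SpikeEvent[][] {
  const bins: SpikeEvent[][] = [];
  for (const ev of spikes) {
    const i = Math.floor(ev[0] / tickSec);
    (bins[i] ??= []).push(ev);
  }
  return bins;
}
```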
The full stack: Next.js 16, React 19, Tailwind 4, Three.js, TypeScript, and a bit of Python for the offline pipeline. MIT-licensed, code on GitHub.
Why I Built This
There’s an aesthetic I keep chasing: rare data + a working artifact. Most people who care about Cortical Labs read the paper, look at the figures, and move on. The recordings sit in HDF5 files in research repositories, accessible but inert. I wanted to make them playable. To take 240,000 real spikes from real human cells and turn them into something a person can open in a browser tab and actually interact with.
The neuroscience interpretation is shallow on purpose. I’m not running closed-loop experiments, I’m not training a network to play optimally, I’m not contributing to the literature. I’m translating real biological data into a real game and letting the game be the experience. The “what’s happening here” is the entire point.
Try It
pinkysbrain.xyz — pick two brains, tune the controls, watch them play. The whole thing is ~3MB to load and runs in any modern browser. No login, no telemetry, no analytics.
If you want to read the code, github.com/MichaelLod/pinkysbrain — MIT licensed.
— Michael 🇦🇹