Seeing Yourself Seeing Yourself, a circular buffer for live video

DeepDream Vision Quest is an interactive art installation that applies DeepDream transformations to live video. This case study focuses on a utility feature I built for it, a circular image buffer for video input.

During long exhibition runs, I noticed that when no one was in front of the camera, the system would keep hallucinating on the last frame it captured, eventually reaching a static local maximum. The projection looked frozen until a new participant stepped into view. I needed a way to create an attract loop, something that would keep the system visually active and signal that it was still listening when no one interacted with it.

Part memory, part anticipation. Frippertronics for the living room

Feature Testing • Webcam feed on the left, looping memory playback in the middle, and the frame log on the right. Grinning because it worked just like I’d hoped

Sketching

Understanding the problem

The DeepDream Vision Quest installation code was written in Python 2.6, using OpenCV for live video capture and NumPy arrays as image buffers. I already knew how to grab frames from the webcam, store them, and transform them. The challenge was figuring out how to turn that memory into something that felt alive and continuous, not just a frozen snapshot of the last person who walked by. I sketched grids of frames to work out how new frames replaced old ones, how the buffer wrapped around, and how playback could start anywhere in the sequence.
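
For orientation, the capture side looks roughly like this. A minimal sketch using the modern cv2 API rather than the Python 2.6 code the installation actually ran:

    import cv2
    import numpy as np

    # Read webcam frames and keep them as NumPy arrays: the raw material
    # for a frame buffer. Illustrative sketch, not the installation code.
    cap = cv2.VideoCapture(0)              # default webcam
    frames = []
    for _ in range(30):
        ok, frame = cap.read()             # frame: (height, width, 3) uint8
        if ok:
            frames.append(frame)
    cap.release()

    if frames:
        stack = np.stack(frames)           # (n, h, w, 3) array of frames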

From My Notebook • This is me working out the mental model of a rolling memory, thinking in terms of order and continuity

Performing With Memories

Finding the right solution

I realized that to keep adding new frames to a fixed-size buffer, I had to drop the oldest ones in a rotating sequence, like a tape loop. At first, I imagined a playhead moving across the loop. Later, I saw that the playhead was really a window into the natural rotation of the array, showing whatever index happened to be there. Once I figured out that the array and window could be offset independently, the looping behavior became expressive, using performative techniques (sketched in code after the list):

  • Offsets — starting playback at different points so the loop felt less repetitive

  • Oscillators — using sine, sawtooth, or square waves to modulate playback start and stop points, creating a breathing, cyclical motion

  • Repeaters — controlling how many times a frame or set of frames would repeat before advancing, creating rhythmic or glitchy effects
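
To make those techniques concrete, here is a minimal sketch in plain Python. The class and parameter names are illustrative, not the installation's actual API:

    import math

    class LoopPlayhead:
        """A playhead over a fixed-size frame loop: an offset sets where
        playback starts, a repeat count holds frames for rhythmic effects,
        and an oscillator can nudge the offset as the loop plays."""

        def __init__(self, length, offset=0, repeats=1):
            self.length = length    # frames in the loop
            self.offset = offset    # where playback starts within the loop
            self.repeats = repeats  # ticks each frame holds before advancing
            self.position = 0       # frames advanced since playback began
            self._tick = 0

        def step(self):
            """Return the buffer index to display, then advance."""
            index = (self.offset + self.position) % self.length
            self._tick += 1
            if self._tick >= self.repeats:
                self._tick = 0
                self.position += 1
            return index

        def modulate(self, t, depth=8):
            """Sine-modulate the start offset so each pass through the
            loop begins somewhere slightly different: a breathing motion."""
            self.offset = int(depth * (math.sin(t) + 1) / 2)

    # A 30-frame loop where each frame holds for 2 ticks while an
    # oscillator slowly drifts the start point.
    playhead = LoopPlayhead(length=30, repeats=2)
    for t in range(120):
        playhead.modulate(t * 0.05)
        frame_index = playhead.step()  # look this index up in the buffer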

From My Notebook • Apparently I realized, oh, it's a circle. Looks like I'm also figuring out that the iterative path of the playhead can be modulated by a cyclic function

Computer science textbook diagram of my apparent "discovery"

Ready to Play With

Integration

I wrote a dedicated Buffer class for the feature, using a 4D array and numpy.roll() to shift frame order efficiently while continuing to write new frames in real time. This made it easy to drop the feature into my installation's existing signal flow, and gave it legs for future applications.
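
A stripped-down sketch of that idea; the class and method names here are illustrative, not the actual Buffer API:

    import numpy as np

    # Frames live in a 4D array (frame, height, width, channel), and
    # numpy.roll() rotates the frame axis so index 0 is always the newest.
    class FrameBuffer:
        def __init__(self, length, height, width, channels=3):
            self.frames = np.zeros((length, height, width, channels),
                                   dtype=np.uint8)

        def write(self, frame):
            """Rotate the buffer one slot and overwrite the oldest frame."""
            self.frames = np.roll(self.frames, 1, axis=0)
            self.frames[0] = frame

        def read(self, age):
            """Read by age: 0 = newest, 1 = one frame ago, and so on."""
            return self.frames[age % len(self.frames)]

Rolling the array keeps the newest frame pinned at index 0, so a playhead offset is simply an age into the past, which is what makes the offset and oscillator tricks above cheap to express.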

  • Early commits got the core frame storage and cycling working

  • Refinements eliminated glitches and kept playback continuous

  • Slow-shutter blending and frame averaging features added for ghost trails and softness (sketched after this list)

  • Cleanup and modularization for integration with the development branch of my DeepDream Vision Quest codebase, where I could begin exploring how to use it expressively.
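
The blending features amount to a couple of small NumPy operations. A hedged sketch; the decay value is illustrative, not a setting from the installation:

    import numpy as np

    def slow_shutter(accumulator, frame, decay=0.85):
        """Exponential moving average over incoming frames; higher decay
        means longer ghost trails. Keep the accumulator in float32 and
        convert to uint8 only for display."""
        return decay * accumulator + (1.0 - decay) * frame.astype(np.float32)

    def frame_average(frames):
        """Average a stack of buffered frames (frame, h, w, c) into one
        soft composite."""
        return frames.mean(axis=0).astype(np.uint8)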

GitKraken • Seeing my work as a tree made it easy to think in branches, keep production clean, and integrate new ideas when they were ready.

Expressive Range

Part memory, part anticipation

Watching yourself on a short delay, especially on a large display (the bigger the better), creates a feedback loop that is part memory and part anticipation. A kind of Frippertronics for the living room.

Longer delays made the playback feel detached, like surveillance footage. Shorter delays were the most compelling, creating an out-of-body sensation and pulling people into the installation, exactly the behavior I was hoping for.

DeepDream Vision Quest Runtime • Using the Frame-Blending feature while buffering live video (I pointed my webcam at the TV)

DeepDream Vision Quest Runtime • Using the playhead cycling feature to show "memories" of a participant's movements

Outcome

Time-shifted video loops can transform idle states into memories. This feature turned downtime into part of the artwork, making the system feel alive and encouraging participation.

Gary Boodhoo

Creative Direction • Product Design | Contact gboodhoo at gmail | Site Design Gary Boodhoo ©2025 | Made in San Francisco