Experiment 7: ID://ASCII
How much of a person can a machine reconstruct from the smallest digital traces?
ID://ASCII examines how algorithmic identities are constructed from the smallest actions we take online. In this experiment, I set up a simple constraint: users must choose three ASCII characters, and those characters become the only vocabulary the system is allowed to use. This limitation mirrors the quiet but powerful ways algorithms construct our digital identities: small, seemingly insignificant inputs such as likes, follows, pauses, and scrolls become the foundations of how systems “see” us.
Once the characters are selected, the system performs a series of transformations that reveal how a living human presence is continuously reduced by computational logic:
⑴ Live Silhouette Capture: The camera extracts the user’s outline, turning a moving, expressive body into a static contour—the first step of algorithmic simplification.
⑵ Silhouette → ASCII Mosaic: The shape is rebuilt entirely using the three chosen characters, showing how personal actions become the limited raw material for algorithmic reconstruction.
⑶ ASCII Density Mapping: Character density shifts according to brightness, echoing how systems weight certain behaviours more heavily, amplifying some traces while diminishing others.
⑷ Side-by-Side Display: The live camera feed sits beside the ASCII version, making visible the gap between self-perception and algorithmic interpretation, a reminder that what the system “knows” is only ever a compressed approximation.
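The density-mapping step above can be sketched as a single brightness-to-character lookup. This is a minimal illustration, not the project's actual code; the character ordering and even bucketing are assumptions:

```javascript
// Hypothetical sketch of ASCII density mapping: map a pixel's
// brightness (0–255) to one of the user's three chosen characters,
// with denser glyphs standing in for darker regions.
function charForBrightness(brightness, chars) {
  // chars is assumed ordered densest → lightest, e.g. ['@', '+', '.']
  const step = 256 / chars.length;
  const index = Math.min(chars.length - 1, Math.floor(brightness / step));
  return chars[index];
}
```

Because the mapping is deterministic, the same brightness always yields the same character, which is what lets the system "weight" some traces more heavily than others simply by how the buckets are drawn.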
I made these transformations intentionally minimal and mechanical. I wanted them to reveal the reduction process rather than beautify it, emphasising how algorithms flatten our individuality into legible formats. The work exposes how our algorithmic identities emerge not from who we are but from the small traces we leave behind, and how easily those traces can become the whole identity.
Read more about the algorithmic process breakdown in Experiment 7's Catalogue of Making.
Try Experiment 7 · Catalogue of Making (Experiment 7) · Output
Process & Challenges
Building this experiment required solving both conceptual and technical challenges, because the experience depends on synchronising three things at once: user choice, live camera input, and ASCII-based reconstruction.
Extracting the user's silhouette was the most challenging part of the experiment because it is the most complex part of the system. I had to work with raw pixel arrays from a mirrored feed, make sure edges detected clearly under different lighting, and filter out noise while preserving the human outline. My initial edge detection was poor, so I sought ChatGPT's help with Sobel Edge Detection: converting the frame to greyscale, re-indexing pixels to account for the mirrored input, and applying Sobel gradient kernels. To keep the ASCII output clean, I thickened the outline by dilating the edges. This part required multiple iterations because too much thresholding erased the face entirely, while too little made the ASCII messy and noisy.
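The Sobel step can be illustrated on a greyscale frame stored as a 2D array of 0–255 values. This is a simplified sketch of the technique, not the project's code; the threshold value is an assumption that would need tuning, as described above:

```javascript
// Standard 3×3 Sobel kernels for horizontal and vertical gradients.
const SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]];
const SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]];

// Returns a binary edge map: 1 where the gradient magnitude
// exceeds the threshold, 0 elsewhere (borders left as 0).
function sobelEdges(grey, threshold = 100) {
  const h = grey.length, w = grey[0].length;
  const edges = Array.from({ length: h }, () => new Array(w).fill(0));
  for (let y = 1; y < h - 1; y++) {
    for (let x = 1; x < w - 1; x++) {
      let gx = 0, gy = 0;
      for (let ky = -1; ky <= 1; ky++) {
        for (let kx = -1; kx <= 1; kx++) {
          const v = grey[y + ky][x + kx];
          gx += SOBEL_X[ky + 1][kx + 1] * v;
          gy += SOBEL_Y[ky + 1][kx + 1] * v;
        }
      }
      if (Math.sqrt(gx * gx + gy * gy) > threshold) edges[y][x] = 1;
    }
  }
  return edges;
}
```

Raising the threshold is exactly the trade-off described above: a high value erases faint facial edges, while a low value lets camera noise through into the ASCII.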
Once the system isolates the edges, it must translate them into ASCII characters. Every "pixel block" has to correspond cleanly to a region of the camera feed, and I had to map out character placement so the silhouette appears recognisable. Initial versions redrew characters randomly each frame, which caused flicker; fixing the grid solved this.
Prototype 4: Reconstructed Self
If you met your algorithmic double, would you recognise it?
At first glance, Reconstructed Self might look like a fun activity where you can look at yourself, draw yourself, and watch the system "draw" you back. But beneath that playful framing lies a critical examination of how identity is constructed across different systems.
I designed the experiment with three parallel views: the live camera feed, the user’s hand-drawn self-portrait, and the ASCII portrait generated by the system. Each one follows different rules of representation (embodied, expressive, and computational), mirroring how our gestures, clicks, and pauses online are constantly interpreted through multiple layers.
Once the user finishes drawing, the system transforms the camera feed into an ASCII identity, revealing how computational logic reduces what it sees:
⑴ Live Embodied Self: The left panel shows the user’s mirrored presence, a fluid, continuous reflection of the body.
⑵ Drawn Expressive Self: The middle panel captures how the user imagines themselves, full of subjectivity and intentional expression.
⑶ ASCII Algorithmic Self: The right panel reconstructs the user using only the three chosen ASCII characters, flattening the face into a symbolic, compressed data portrait.
These three views outline a clear progression from lived experience to computational abstraction. The experiment reveals how identity is never singular; it is constantly split, interpreted, and reassembled. What begins as a simple drawing activity ultimately exposes the hidden violence of reduction: how algorithms compress us into simplified silhouettes so we can be profiled, sorted, and fed back into digital systems that shape how we are seen.
Read more about the algorithmic process breakdown in Prototype 4's Catalogue of Making.
Try Prototype 4 · Catalogue of Making (Prototype 4) · Output
Process & Challenges
Building this experiment required designing three distinct visual systems (camera, drawing layer, and ASCII transformation) while ensuring they remained perfectly aligned, consistent, and responsive. The central challenge was synchronising three representations of the same identity using different forms of data: pixels, drawn strokes, and ASCII characters.
The first technical hurdle was constructing fixed 1:1 boxes that always maintained a real camera aspect ratio without distortion. This required manually calculating the aspect ratio, recomputing the layout on resize, and ensuring the camera was mirrored correctly using scale(-1,1). Any misalignment distorted the user’s reflection, making the left panel feel uncanny.
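The aspect-ratio calculation can be sketched as a small helper that fits the camera feed inside a fixed box without stretching it. This is an illustrative assumption about the approach, not the project's exact code:

```javascript
// Hypothetical helper: fit a (videoW × videoH) feed inside a square
// box of side `box` while preserving aspect ratio, centring the feed
// and letterboxing the remainder.
function fitToBox(videoW, videoH, box) {
  const scale = Math.min(box / videoW, box / videoH);
  const w = videoW * scale;
  const h = videoH * scale;
  return { w, h, x: (box - w) / 2, y: (box - h) / 2 };
}
```

In the p5.js draw loop, the mirrored feed could then be drawn by translating to the right edge of the computed rectangle and applying scale(-1, 1) before image(), so the reflection reads correctly.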
The second major challenge was unifying the drawing layer and the ASCII layer. Both had to be the exact same resolution so that the ASCII output represented the drawing pixel-for-pixel. This meant mapping mouse input from screen coordinates into drawing-layer coordinates precisely, which revealed problems such as:
• incorrect mapping when resizing the window
• the drawn strokes appearing offset
• pixel density mismatches causing incorrect ASCII brightness
• ASCII text overflowing outside its box
These were solved by locking both layers to a shared rendering size (640×480) and carefully mapping mouse input using map().
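The coordinate mapping can be sketched as follows. p5.js provides map() natively; it is reimplemented here only so the example runs standalone, and the panel variables are hypothetical names for the on-screen bounds of the drawing panel:

```javascript
// Linear remap, matching the behaviour of p5.js's map().
function map(value, start1, stop1, start2, stop2) {
  return start2 + (stop2 - start2) * ((value - start1) / (stop1 - start1));
}

// Hypothetical example: the drawing panel occupies screen pixels
// panelX..panelX+panelW horizontally, but the drawing layer itself
// is locked to a fixed 640×480 resolution.
function toLayerCoords(mouseX, mouseY, panelX, panelY, panelW, panelH) {
  return {
    x: map(mouseX, panelX, panelX + panelW, 0, 640),
    y: map(mouseY, panelY, panelY + panelH, 0, 480),
  };
}
```

Because the layer size is constant, resizing the window only changes the panel bounds fed into the remap, so strokes land in the same layer pixels regardless of window size.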
The ASCII conversion also gave me issues because it needed to update in real time, so that code has to:
• compute brightness from sampled pixels
• map each value to a character from a controlled ASCII ramp
• ensure each ASCII character sits inside its own grid cell
• prevent jitter, stretching, or misalignment
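The ramp lookup and the fixed-grid placement in the steps above can be sketched together. The ramp string here is a common dark-to-light ordering used as an assumption; the project's actual ramp may differ:

```javascript
// Assumed ramp: densest character for darkest pixels.
const RAMP = '@%#*+=-:. ';

// Map brightness (0 dark … 255 light) to a ramp character.
function rampChar(brightness) {
  const i = Math.floor((brightness / 255) * (RAMP.length - 1));
  return RAMP[i];
}

// Fixed grid cells: each character's anchor is derived purely from
// its column/row index, so positions never drift frame to frame.
function cellOrigin(col, row, cols, rows, w = 640, h = 480) {
  return { x: col * (w / cols), y: row * (h / rows) };
}
```

Deriving every anchor from the grid index (rather than measuring text each frame) is what prevents the jitter and misalignment listed above.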
Finally, balancing coverage vs. readability required tuning the number of columns and rows so the ASCII fully filled the panel but remained legible.
Moving Forward
For the next semester, I would like to expand my work beyond p5.js. Some tools I am considering: TouchDesigner, Arduino, Processing, Three.js, and ML5.js. During the holidays, I plan to research further into which types of data are collected about us and which are not. As I experiment over the holidays, I will keep my three pillars in mind and tie the work closely to them. Ultimately, I will start planning the exhibition design for this project.