Prototype 2: Encode Shape
What remains of our gestures once the system has finished interpreting us?
This prototype began with that question. Through building it, I learnt how our expressions are converted into features, coordinates, and labelled datasets. I wanted to expose that hidden pipeline and turn the invisible work of datafication into an experience users could actually witness.
In this experiment, I set the rules: users draw five shapes, each within only five seconds, creating a fast-paced 25-second window of instinctive action. The time limit is intentional, mimicking the everyday digital moments where we act subconsciously (scrolling, tapping, pausing) while our micro-behaviours are continuously captured.
After drawing, the system performs a sequence of reductions that reveal how expressive marks are stripped down by computational logic:
⑴ Replay of All Five Shapes: Sketches reappear in scattered positions, showing how expressive marks become reproducible behavioural traces.
⑵ Stroke → Pixels: Smooth lines break into pixel blocks, exposing the system’s first layer of structural interpretation.
⑶ Pixels → Dots: The expressive mark collapses further into sparse points, reduced to coordinates, not meaning.
⑷ Dots → Encoded Text: Finally, everything becomes symbolised and compressed into text-like code, mirroring how algorithms store us as data strings.
These transitions are deliberately abrupt rather than aesthetic. They dramatise the violence of reduction: the way organic, expressive human gestures are continuously flattened into abstractions so they can be categorised, sorted, and fed back into algorithmic systems.
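The stroke → pixels step in ⑵ above can be sketched as a small pure function. This is a minimal illustration, not the prototype's actual code: it assumes a stroke is an array of `{x, y}` points and uses a hypothetical 20px grid cell size.

```javascript
// Assumed grid resolution for the pixelation step (illustrative value).
const CELL = 20;

// Snap every point of a smooth stroke to the grid cell that contains it,
// keeping each cell only once, so the line becomes discrete pixel blocks.
function strokeToPixels(stroke, cell = CELL) {
  const seen = new Set();
  const cells = [];
  for (const { x, y } of stroke) {
    const col = Math.floor(x / cell);
    const row = Math.floor(y / cell);
    const key = `${col},${row}`;
    if (!seen.has(key)) {
      seen.add(key);
      cells.push({ col, row });
    }
  }
  return cells;
}
```

The deduplication is what makes the reduction visible: however many points the gesture contained, only the occupied grid cells survive.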
Read more about the algorithmic process breakdown in Prototype 2's Catalogue of Making.
Try Prototype 2 · Catalogue of Making (Prototype 2)

Initial Stages Output

Reworking the Visual Logic of Reduction
In the early stages of this prototype, the system simply populated the entire screen with the user’s five drawings once all shapes were completed. Visually, it was engaging, but conceptually it fell short. It showed repetition, not reduction. Nothing about that sequence revealed the core idea I was trying to explore: how human gestures are progressively stripped of meaning as they pass through computational processes.
Returning to my readings on ASCII, symbolic encoding, and precedents like Rapoport’s Shoe-Field, I realised that I needed a more intentional visual structure: one that made the loss of information visible. This led to the four-stage reduction sequence I developed: Stroke → Pixels → Dots → Encoded Text.
⑴ Stroke represents the original, fully human gesture, the fluidity, speed, and nuance that only a person can create.
⑵ Pixels break that gesture into a grid, mimicking early digital capture technologies that compress curves into discrete units.
⑶ Dots further abstract the form, keeping only the most minimal structural anchors of the drawing—mirroring how systems often retain just “key features” instead of whole expressions.
⑷ Encoded Text is the final collapse of the gesture into pure symbolic representation. This choice came directly from my research into ASCII as a fundamental moment in computational reduction: the point where visual meaning had to be converted into alphanumeric codes to be machine-readable.
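The dots stage in ⑶ can be sketched as a sampling pass. This is a hedged illustration of the idea rather than the prototype's implementation: the sampling interval `every` is an assumed tuning value, and the `.map` deliberately strips any extra metadata (such as a timestamp field) so a dot is only a coordinate pair.

```javascript
// Keep only every Nth point of a stroke as a sparse "key feature",
// and discard any metadata beyond the bare coordinates.
function toDots(points, every = 8) {
  return points
    .filter((_, i) => i % every === 0)
    .map(({ x, y }) => ({ x, y }));
}
```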
I chose to encode the shape path as text because it mirrors the way our actions are ultimately stored as strings, logs, and tokens. The drawing no longer exists as an image but as a trace, a sequence, a dataset. This textual endpoint reveals the core provocation of the prototype: what remains of our gestures once the system has finished interpreting us?
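One possible version of the dots → encoded-text step, to make the idea concrete: serialise each coordinate pair into a compact ASCII-safe string, so the drawing survives only as a data string. The base-36 format here is an illustrative choice, not the encoding the prototype necessarily uses.

```javascript
// Serialise a list of {x, y} dots into a single text string:
// each coordinate is rounded and written in base 36, pairs are
// separated by "|", mirroring how gestures end up as logged strings.
function encodeDots(dots) {
  return dots
    .map(({ x, y }) => Math.round(x).toString(36) + "." + Math.round(y).toString(36))
    .join("|");
}
```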
Final Output
Challenges Faced
⑴ Managing Multiple Phases in One System
I spent many hours coordinating the four different states: countdown → draw → replay → reduction. Each phase required its own timing logic, UI behaviour, and rendering rules, and I went through rounds of iterations to debug the system and ensure the transitions didn't break.
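The phase coordination can be reduced to a single lookup from elapsed time to active phase. This is a minimal sketch under stated assumptions: the 5-second draw and 2-second replay come from this write-up, while the countdown and reduction durations are placeholders.

```javascript
// Ordered phases with durations in milliseconds. The draw (5s) and
// replay (2s) values match the prototype's rules; countdown and
// reduction lengths are assumed for illustration.
const PHASES = [
  { name: "countdown", duration: 3000 },
  { name: "draw", duration: 5000 },
  { name: "replay", duration: 2000 },
  { name: "reduction", duration: 4000 },
];

// Given elapsed ms since the sequence started, return the active phase,
// so every frame can apply that phase's timing and rendering rules.
function phaseAt(elapsed) {
  let t = elapsed;
  for (const phase of PHASES) {
    if (t < phase.duration) return phase.name;
    t -= phase.duration;
  }
  return "done";
}
```

Centralising the schedule in one table means a transition can never "break" by two timers drifting apart: there is only one clock.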
⑵ Capturing User Strokes Accurately
At first the strokes were not drawn accurately: the line was laggy and misaligned. I sought ChatGPT's help to correct the alignment. The overlay canvas had to store every point with a timestamp and avoid "breaks" in the line when users drew too quickly or too slowly.
⑶ Replaying All Shapes Simultaneously
Initially, the drawings simply appeared on screen once users completed them. After much thought, I added the stroke-replay function to make the "raw human input" visible before the system intervenes. Replaying the shapes in sync required mapping thousands of timestamped points into a fixed 2-second animation window, and it was challenging to ensure the shapes didn't "jump" or distort during replay.