From Face to Metrics
Today, being a person isn't just about having a body. It's about having data. Every time your face is scanned, your heart rate tracked, or your clicks recorded, parts of you are turned into numbers. This process is called Datafication (Pillar 1): human traits become measurable data that can be stored, analysed, and compared.
From this data, a Data Double is created. This is a digital version of you, made up of biometric information, behaviours, and labels. When you pass through airport security or log into a banking app, the system isn't interacting with you as a person. It's interacting with your Data Double. That double isn't your personality or your soul. It's a collection of categories such as "low risk" or "verified user". These labels determine how systems treat you (Haggerty and Ericson 2000; Bouk 2018).
TouchDesigner Blob Tracking
I was drawn to this project featured on Derivative because of how it seamlessly merges technical precision with spatial sensitivity. The blob tracking system doesn't just detect movement. It transforms bodies into dynamic, responsive visual forms that feel alive. The human presence becomes the interface.
As the user moves, different visual treatments ripple across the blob: distortion, colour shifts, particle overlays, and reactive textures that make the tracked area feel computationally alive. I love how the numerical values appear alongside the visuals.
The data doesn't stay hidden; it surfaces in real time, exposing the system's inner logic. This is something I wanted my own work to make visible. It transforms tracking from a purely technical process into something experiential, revealing how movement is constantly being translated into data.
Experimenting in TouchDesigner
Challenges Faced
Building this system in TouchDesigner was far more frustrating than I expected. Every time I reopened the file, the camera would stop communicating with MediaPipe, nodes would disappear, or connections would break. What worked perfectly one night would fail the next morning, probably a skill issue of mine. I rebuilt the network from scratch five separate times. Each time, I had to reconfigure the system, re-parse the JSON, and reconstruct the landmark tables, only for the network to hit the same instability again.
The inconsistency made it difficult to focus on refining the interaction itself, because I was constantly troubleshooting technical breakdowns instead of developing the concept. At that stage, I had no time to lose. I chose to stop fighting the software environment and move the work to the web, where everything is more stable.
Prototype 6: Face Value
Prototype 6: Face Value confronts the first threshold of the exhibit: Biometric Data. It makes visible the technical process behind biometric capture. As participants stand before the camera, their faces are not interpreted as expressions but decomposed into measurable landmarks such as the eyes, nose, and mouth, each translated into coordinate points and bounding boxes. What appears on the screen is not simply a facial overlay. It is a computational breakdown of detection → landmark mapping → value extraction → categorisation.
This step-by-step reduction exposes datafication at work. The face becomes a dataset, parsed into numerical features that can be stored, compared, and classified. From this, a Data Double emerges (Haggerty and Ericson 2000): a version of the self composed of metrics rather than memory or intention.
By revealing the extraction process live, the prototype confronts viewers with the mechanics of algorithmic identity, where recognition begins as measurement and personhood is reformatted into machine-readable code.
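The four-stage reduction can be sketched in a few lines of JavaScript. This is an illustrative outline only, not the prototype's actual code: the landmark values, function names, and the area threshold are all hypothetical.

```javascript
// Stage 2: landmark mapping — normalised (0–1) coordinates per feature.
// These sample values are hypothetical, standing in for live detection.
const landmarks = {
  leftEye:  { x: 0.38, y: 0.42 },
  rightEye: { x: 0.62, y: 0.42 },
  nose:     { x: 0.50, y: 0.55 },
  mouth:    { x: 0.50, y: 0.70 },
};

// Stage 3: value extraction — a bounding box around all landmarks.
function boundingBox(points) {
  const xs = points.map(p => p.x);
  const ys = points.map(p => p.y);
  return {
    x: Math.min(...xs),
    y: Math.min(...ys),
    w: Math.max(...xs) - Math.min(...xs),
    h: Math.max(...ys) - Math.min(...ys),
  };
}

// Stage 4: categorisation — the face becomes a label, not a person.
// The 0.02 area threshold is an arbitrary illustrative value.
function categorise(box) {
  return box.w * box.h > 0.02 ? 'FACE_DETECTED' : 'NO_FACE';
}

const box = boundingBox(Object.values(landmarks));
console.log(box, categorise(box));
```

The point of the sketch is the shape of the pipeline: by the final stage, the whole face has collapsed into a single category string.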
Read more about the algorithmic process breakdown in Prototype 6's Catalogue of Making.
I developed Face Value through an iterative dialogue with Gemini 3 in Google's AI Studio, where I translated my conceptual intent into functional instructions. I described how the screen should mirror the familiarity of Face ID, where users see themselves framed and scanned, but reframed as a critical illustration rather than a seamless authentication tool. Gemini helped generate the foundational files, which I refined to align with the exhibit's focus on biometric data. The system uses MediaPipe to detect and isolate facial landmarks such as the left eye, right eye, nose, and mouth, turning each landmark into measurable coordinates displayed live on screen.
To make the process explicit, I extended the functionality beyond basic landmark detection. The prototype calculates distances between features: between the eyes, from nose to eyes, and from nose to mouth. These are measured first in coordinates, then converted into centimetres to simulate bodily quantification. What users see in the bottom-left console is the raw numerical trace of their face. They can export their capture as a full-frame image or a cropped biometric box. Once stored, these entries accumulate in a shared database. The result is not just a scan, but a growing archive of data: faces abstracted into metrics, stored as comparable units rather than individuals.
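The coordinate-to-centimetre conversion can be sketched as below. This is a minimal sketch, assuming the conversion scales against an average interpupillary distance of about 6.3 cm; that constant and the sample landmark values are my assumptions, not the prototype's actual calibration.

```javascript
// Assumed scale reference: average adult interpupillary distance (~6.3 cm).
const AVG_IPD_CM = 6.3;

// Euclidean distance between two normalised landmarks.
function dist(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// Convert a landmark-space distance into centimetres by scaling it
// against the measured eye-to-eye distance in the same frame.
function toCm(d, eyeDistance) {
  return (d / eyeDistance) * AVG_IPD_CM;
}

// Hypothetical landmark positions for illustration.
const leftEye  = { x: 0.38, y: 0.42 };
const rightEye = { x: 0.62, y: 0.42 };
const nose     = { x: 0.50, y: 0.55 };

const eyeDist = dist(leftEye, rightEye);
console.log(toCm(eyeDist, eyeDist).toFixed(1));          // eye-to-eye: "6.3" by definition
console.log(toCm(dist(nose, leftEye), eyeDist).toFixed(2)); // nose-to-eye in cm
```

Because everything is scaled relative to the eyes, the numbers stay plausible as the user moves closer to or further from the camera, even though no true depth is measured.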
Visual Archive
Challenges Faced
The biggest difficulty was making the regeneratePattern() function rebuild the grid properly. ASCII characters behave unpredictably at different scales, so keeping the grid readable while still reacting dynamically to user input required repeated recalibration of spacing, density, and cell size.
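The recalibration logic I settled on can be sketched roughly like this. The function name, pixel bounds, and density mapping are illustrative assumptions, not the actual `regeneratePattern()` internals: the idea is simply to clamp the cell size so glyphs never shrink below legibility.

```javascript
// Assumed legibility bounds for a monospace glyph cell (in pixels).
const MIN_CELL_PX = 8;   // below this, ASCII characters blur together
const MAX_CELL_PX = 32;  // above this, the grid feels too coarse

// Map a density slider in [0, 1] to a grid: 0 = coarsest, 1 = finest legible.
function gridFor(width, height, density) {
  const cell = Math.round(MAX_CELL_PX - density * (MAX_CELL_PX - MIN_CELL_PX));
  return {
    cell,
    cols: Math.floor(width / cell),
    rows: Math.floor(height / cell),
  };
}

console.log(gridFor(800, 600, 1)); // finest grid:   8px cells → 100 × 75
console.log(gridFor(800, 600, 0)); // coarsest grid: 32px cells → 25 × 18
```

Clamping the cell size rather than the column count means the grid stays readable at any canvas size, which was the behaviour the repeated recalibration was trying to reach.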
Another issue was designing rules that feel “algorithmic” without becoming overly literal. Early versions produced patterns that looked random rather than meaningfully reduced, forcing me to refine classification logic (vowels, punctuation, length) so users could sense the system’s “reasoning.”
Reducing text without drifting into sentiment analysis or real semantic interpretation meant constantly rethinking the rules to ensure the system behaves like a classifier, not an interpreter.
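A classification pass of this kind can be sketched as follows. The categories follow the rules mentioned above (vowels, punctuation), but the specific glyph mapping and the `classifyChar`/`reduce` names are hypothetical illustrations, not the work's actual code.

```javascript
// Rule-based classification: surface features only, no semantics.
function classifyChar(ch) {
  if ('aeiou'.includes(ch.toLowerCase())) return 'vowel';
  if (/[.,!?;:]/.test(ch)) return 'punctuation';
  if (/[a-z]/i.test(ch)) return 'consonant';
  return 'other';
}

// Each category maps to one glyph — an assumed, arbitrary mapping.
const GLYPHS = { vowel: '@', consonant: '#', punctuation: '*', other: '.' };

// Reduce a text to a glyph string by classification alone:
// the system sorts characters into bins, it never reads meaning.
function reduce(text) {
  return [...text].map(ch => GLYPHS[classifyChar(ch)]).join('');
}

console.log(reduce('Hello, world!')); // "#@##@*.#@###*"
```

Because every rule looks only at the character itself, two texts with opposite meanings but similar letter distributions produce near-identical patterns, which is exactly the classifier-not-interpreter behaviour described above.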