Narrative as Storytelling Tool
To strengthen the storytelling, I approached narrative as a tool to organise and communicate how algorithmic systems construct identity. I needed a speculative narrative framework to support the design of my artefacts, rather than a fully fleshed-out fictional world.
The Grid
The premise of "The Grid" imagines a near-future society where we no longer interact with digital systems directly, but through algorithmic identities generated from biometric and behavioural data. This shift translates abstract processes into tangible interactions.
Rather than building an extensive backstory, I focused on defining the key conditions of this world: how data is captured, how it is processed, and how it is used to represent individuals. These conditions informed the design of each artefact, ensuring that each functions as a component within a system.
The Grid is a narrative framework that provides context. The emphasis remains on how users experience the construction of their algorithmic identity through designed interactions.
Issues
Narrative & Positioning
Jo was on board with the narrative of The Grid. However, when I presented the idea to Andreas, I did not pitch it well, and the story came across as complicated. I should have stated upfront: "I created a narrative plus these prototypes for The Grid." I need to be confident that my narrative communicates the concepts. It has to be design-focused, not a story about the future.
Purpose of Prototypes
Currently, the boundary is blurred: it is not clear whether the prototypes work as storytelling props or as design outcomes. Moving forward, I will state it clearly: "The prototypes are designed to _____, using The Grid narrative as a framework." The prototypes are tools that simulate algorithmic identity construction; the narrative is the frame that makes it legible.
Designer Identity vs Concept
I am stuck between two directions: concept-heavy or design-focused. Based on my work so far, this project should be a design-led speculative system. I need to show who I am as a designer, not overcomplicate things into theory-heavy territory.
Story Over-Questioning
The Grid as a narrative sounded deep and vulnerable, and it could invite questions about the theory behind it. I need to curate the work to avoid these questions. The experience of the prototypes should be intuitive: people should feel, then reflect, not debate the theory.
I wanted to revisit TouchDesigner and explore how I could build a system that detects, scans, and captures a user's face directly from a live camera feed. The intention was to translate this process into something that feels immediate and machine-driven.
I struggled to create this as I was unfamiliar with TouchDesigner. However, Wen Soon the GOAT was kind enough to guide me through a simple tutorial, which helped me understand the basic workflow and build a working demo version of the system.
Prototyping Further
Honestly, I just have skill issues: I am not good with TouchDesigner and was not able to make the biometrics work the way I wanted there, so I went back to a web-based approach. In this new prototype, I worked with Gemini to create biometrics.py as a training script. Its job is to:
1) read a face-image dataset
2) extract labels from the filenames
3) train a neural network
4) predict age, gender, and race from a face image
5) save the trained model so it can later be used in the web
This script uses the UTKFace dataset, where each image filename already contains labels such as age, gender, and race. The script reads those filenames, turns them into training labels, loads the images, trains a CNN, and saves the result as a .h5 model.
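As an illustration of the filename-to-label step, a minimal parser might look like this. This is a sketch, not the actual biometrics.py code: `parse_utkface_name` is a hypothetical helper, relying on the fact that UTKFace names files as [age]_[gender]_[race]_[date&time].jpg.

```python
import os

def parse_utkface_name(path):
    """Extract (age, gender, race) labels from a UTKFace filename.

    UTKFace files are named [age]_[gender]_[race]_[date&time].jpg,
    e.g. "25_0_1_20170113190853533.jpg".
    """
    name = os.path.basename(path)
    stem = name.split(".")[0]
    parts = stem.split("_")
    if len(parts) < 4:
        raise ValueError(f"unexpected UTKFace filename: {name}")
    # All three labels are plain integers encoded in the name.
    return int(parts[0]), int(parts[1]), int(parts[2])

# Example: parse_utkface_name("25_0_1_20170113190853533.jpg") -> (25, 0, 1)
```

The actual script does the equivalent of this for every file before any image is loaded, which is what makes the dataset self-labelling.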
In the code, there is a shared feature extractor that works as the main shared body of the network. It looks for visual patterns in the face image, like edges, textures, shapes, and facial structure. It turns the 2D feature maps into a 1D vector so that it can be fed into the output layers, learning a shared facial representation that is used for all three prediction tasks. I used TensorFlow to train the network.
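As a minimal numerical sketch of that shared-body idea (not the actual TensorFlow code; the sizes and weights here are arbitrary), the flattened feature vector feeds three separate task heads:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend output of the shared convolutional body: 8 feature maps of 4x4.
feature_maps = rng.standard_normal((8, 4, 4))

# Flatten the 2D maps into one 1D vector (the shared representation).
shared = feature_maps.reshape(-1)            # shape: (128,)

# Three task-specific heads all read the SAME shared vector.
w_age    = rng.standard_normal((1, 128))     # age regression head
w_gender = rng.standard_normal((2, 128))     # 2-class gender head
w_race   = rng.standard_normal((5, 128))     # 5-class race head (UTKFace)

age_out    = w_age    @ shared               # shape: (1,)
gender_out = w_gender @ shared               # shape: (2,)
race_out   = w_race   @ shared               # shape: (5,)
```

Because all three heads branch off one flattened vector, the body is forced to learn facial features useful for every task at once, which is the multi-task design the script uses.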
Though this script helped to show how a system might take a face and convert it into machine-readable identity categories, it had several issues. I imported pandas as pd, but it is not used anywhere. And if an image fails to load, cv2.imread() returns None and my subsequent cv2.resize() call breaks.
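A defensive loading loop would avoid that crash. This is a sketch of the guard pattern, not the original script: `load_dataset` is a hypothetical helper, and `loader` stands in for cv2.imread, which returns None on unreadable files.

```python
def load_dataset(paths, loader, resize):
    """Load images defensively: skip files the loader cannot read.

    loader(path) should return an image, or None on failure (this is
    cv2.imread's behaviour). Passing that None straight into a resize
    call is what crashed the original script.
    """
    images, kept_paths = [], []
    for path in paths:
        img = loader(path)
        if img is None:          # unreadable/corrupt file: skip, don't crash
            continue
        images.append(resize(img))
        kept_paths.append(path)
    return images, kept_paths
```

With a stand-in loader, `load_dataset(["ok.jpg", "broken.jpg"], loader={"ok.jpg": [[1, 2]]}.get, resize=lambda im: im)` keeps only "ok.jpg" instead of raising.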
This is a fairly basic CNN that works as a prototype; it is not very advanced compared to modern face models. Predicting gender and race from faces is ethically and technically problematic: the predictions can be inaccurate or biased depending on the dataset.
Prototyping Further
I conducted a user test of this prototype with my grandmother and identified several issues, particularly in the system's accuracy. Although she is 84 years old, the model estimated her age as 38. This showed a significant discrepancy in how biometric data is interpreted and classified, and the inaccuracy raised questions about the reliability of such systems, especially when applied across different age groups.
Despite this, I found value in the visual design of the output. The transformation of the face into pixel/square fragments, along with the use of ASCII characters mapped onto facial landmarks, effectively conveyed the idea of the human subject being translated into machine-readable data. This visual approach has potential for further exploration, particularly as a way to emphasise the abstraction and reduction of identity within algorithmic systems.
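The ASCII effect can be approximated by mapping pixel brightness to a ramp of characters. This is a generic sketch of the technique rather than my actual rendering code; the ramp string and the `to_ascii` helper are illustrative choices.

```python
# Darker pixels map to denser characters, lighter pixels to sparser ones.
RAMP = "@%#*+=-:. "          # dense -> sparse

def to_ascii(gray_rows, levels=len(RAMP)):
    """Convert a grid of 0-255 brightness values into ASCII art lines."""
    lines = []
    for row in gray_rows:
        chars = [RAMP[min(v * levels // 256, levels - 1)] for v in row]
        lines.append("".join(chars))
    return lines

# Example: to_ascii([[0, 128, 255]]) -> ["@= "]
```

Sampling the camera frame into a coarse grid before mapping is what produces the square-fragment look: the face literally becomes a small table of symbols.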
face-api.js
I researched further and found that I could use face-api.js to perform real-time biometric analysis directly in the browser, without needing a backend or external processing system. It provides pre-trained models for tasks like face detection, age estimation, and gender classification. Instead of training a model as in the previous biometrics.py script, this let me integrate those capabilities into the web prototype. It runs on TensorFlow.js, continuously processing the webcam feed and updating outputs frame by frame, creating a live feedback loop where users see themselves being interpreted by the system as data.
Working with face-api.js showed that its classifications are more accurate than my own script's. Still, it was interesting to see how face-api.js introduced its own imperfection and limitation: as observed in user testing, the system can be inaccurate (e.g., misestimating age). Rather than being a flaw, this was conceptually valuable, showing how algorithmic systems can misread, oversimplify, or wrongly classify individuals.
Implementing Narrative
Using face-api.js, I implemented the system in a website to situate it within the narrative of "The Grid". Here, the system does three things:
1) Find a face in the camera
2) Map key facial points (landmarks)
3) Estimate attributes (age + gender)
I used the TinyFaceDetector model to scan the image, look for patterns that resemble a face, and return a bounding box. The landmark model returns an array of 68 points across the face. To estimate age and gender, I used faceapi.nets.ageGenderNet, which looks at skin texture, facial proportions, and learned statistical patterns to produce its outputs.
To develop this prototype further, I introduced a feature where clicking the register button captures and downloads the user's image, presenting them as a successfully registered citizen within The Grid, alongside their system-estimated age and gender. While this extends the interaction, I felt it was still not communicating enough. At surface level, it risks being read simply as a face scanner, when the intention of the project is more critical.
Moving Forward
I need to show how the system demonstrates that a person is reduced into machine-readable categories. The idea needs to be developed further to clearly establish this as the first stage of the pipeline: before any behaviour or interaction takes place, the system defines who you are, not by asking or verifying, but simply by computing.