Code to Video
As much as I wished I could avoid the technical aspects of the project, it was time to confront them head-on. This week, I refined the code to ensure the stages flow seamlessly. I worked on improving the interactions and clarifying the connections between the layers, recognising that the strength of the work depends not just on the concept, but on how coherently the system is experienced. Alongside this, I also storyboarded how I want to film the video for the Viva Voce, thinking through how to communicate the project clearly within a short, structured format.
I also had to consider that I would not always be present to guide users through the experience, which pushed me to rethink how the system could communicate on its own. To address this, I introduced simple instruction and guide modals to help users understand what each layer entails before interacting with it.
These prompts clarify what is being revealed at each stage, making the processes of body, behaviour, and identity more legible. I also developed transition screens with timed text, allowing users around ten seconds to read and process the information, ensuring the experience remains guided without my direct intervention.
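As a reference point, here is a minimal sketch of how such timed transitions can work, assuming a plain DOM setup; the element id, the message copy, and the ten-second constant are illustrative rather than lifted from the prototype:

```typescript
// Minimal sketch of timed transition text, assuming a plain DOM setup.
// "transition-text" and the messages below are hypothetical placeholders.
const READ_TIME_MS = 10_000; // roughly ten seconds per message, as described

async function playTransition(messages: string[]): Promise<void> {
  const screen = document.getElementById("transition-text");
  if (!screen) return;
  for (const message of messages) {
    screen.textContent = message; // show one line of guidance at a time
    await new Promise((resolve) => setTimeout(resolve, READ_TIME_MS));
  }
}

// Example: bridging the biometric layer into the behavioural layer.
playTransition([
  "Your face has been captured.",
  "Next, your behaviour will be observed as you browse.",
]);
```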
Biometric / Display Page
I reconsidered the use of gestures like “wave hand to reveal face” and “open and close palm to save image,” and realised they were unnecessary at this stage of the experience. Rather than enriching the interaction, they disrupted the flow and introduced friction. I decided to remove them and instead introduce a larger, centred “Next” button to guide users forward more intuitively. This shift also helps transition users into using the mouse for the next stage, creating a smoother and more consistent interaction flow across the system.
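In practice this was a small change; a hedged sketch of the idea, with hypothetical stage names, element ids, and a placeholder showStage() helper, might look like this:

```typescript
// Sketch of swapping gesture triggers for a single "Next" button.
// The stage names, button id, and showStage() helper are hypothetical.
const stages = ["biometric", "behavioural", "identity"] as const;
let current = 0;

declare function showStage(stage: (typeof stages)[number]): void;

document.getElementById("next-button")?.addEventListener("click", () => {
  current = Math.min(current + 1, stages.length - 1);
  showStage(stages[current]); // advance directly instead of waiting for a wave
});
```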
Behavioural / Result Display
I decided to simplify the results displayed after the behavioural stage to make them more immediate and legible. Instead of presenting multiple data points, such as the top three categories, the least engaged category, and timing, within a single modal, I focused on the most impactful insights: the user’s top category of interest and their average time spent, revealed through sequential text transitions. Since the images used to construct their algorithmic identity in the next layer depend largely on their top category, being explicit about it felt like the right choice. This shift was intentional, as the previous modal risked overwhelming users with too much information at once, reducing clarity and engagement. By streamlining the output, I aimed to prioritise understanding over completeness, allowing users to more easily grasp how their interactions translate into behavioural data.
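To make the reduction concrete, here is a hedged sketch of how the two surfaced insights could be derived; the event shape is an assumption for illustration, not the prototype’s actual data model:

```typescript
// Hedged sketch of deriving the two surfaced insights: the top category of
// interest and the average time spent. The DwellEvent shape is an assumption.
interface DwellEvent {
  category: string;   // e.g. "fashion", "technology"
  durationMs: number; // time the user spent on one item
}

function summarise(events: DwellEvent[]) {
  const totals = new Map<string, number>();
  let overallMs = 0;
  for (const { category, durationMs } of events) {
    totals.set(category, (totals.get(category) ?? 0) + durationMs);
    overallMs += durationMs;
  }
  // Top category: the one with the largest accumulated dwell time.
  const topCategory =
    [...totals.entries()].sort((a, b) => b[1] - a[1])[0]?.[0] ?? "none";
  const averageMs = events.length ? overallMs / events.length : 0;
  return { topCategory, averageMs }; // everything else stays backstage
}
```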
Identity / Hand-tracking Interaction
Since one of my key intentions was to minimise, or even eliminate, the use of the mouse across the prototype, I began to question the interaction design in the final stage. Requiring users to keep one hand open while using the other to scroll with a mouse to zoom felt disjointed and disruptive, breaking the embodied nature of the experience established earlier.
In response, I reworked this layer to be fully hand-tracked, allowing users to remain within a consistent interaction mode. One hand opens and closes to reveal or hide the identity, while the other performs a pinching gesture and moves horizontally to control zoom. This shift creates a more cohesive and immersive interaction, where the user’s body remains the primary interface, reinforcing the project’s focus on how identity is constructed through bodily input.
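As a rough illustration of the gesture logic, here is a sketch assuming MediaPipe-style hand landmarks (21 normalised points per hand); all thresholds, the hand-role assignment, and the render() call are placeholders rather than the prototype’s actual implementation:

```typescript
// Hedged sketch of the two-hand interaction, assuming MediaPipe-style
// landmarks (21 normalised points per hand). Thresholds are illustrative.
type Point = { x: number; y: number };
type Hand = Point[]; // 0 = wrist, 4 = thumb tip, 8 = index tip, 9 = middle base

const dist = (a: Point, b: Point) => Math.hypot(a.x - b.x, a.y - b.y);

// Open palm: every fingertip sits far from the wrist relative to palm size.
function isOpen(hand: Hand): boolean {
  const palm = dist(hand[0], hand[9]);
  return [8, 12, 16, 20].every((i) => dist(hand[0], hand[i]) > palm * 1.4);
}

// Pinch: thumb tip and index tip nearly touching.
function isPinching(hand: Hand): boolean {
  return dist(hand[4], hand[8]) < 0.05;
}

declare function render(identityVisible: boolean, zoom: number): void;

let zoom = 1;
let lastPinchX: number | null = null;

// Called once per tracked frame with both detected hands.
function onFrame(revealHand: Hand, zoomHand: Hand): void {
  const identityVisible = isOpen(revealHand); // open = reveal, closed = hide

  if (isPinching(zoomHand)) {
    const x = zoomHand[8].x;
    if (lastPinchX !== null) zoom += (x - lastPinchX) * 4; // horizontal drag
    lastPinchX = x;
  } else {
    lastPinchX = null; // pinch released: stop adjusting zoom
  }

  zoom = Math.min(3, Math.max(0.5, zoom));
  render(identityVisible, zoom);
}
```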
Storyboarding
For the storyboarding, I had to carefully consider the constraints of the Viva Voce setup. While I initially wanted to conduct a live demonstration by inviting examiners to interact with the prototype themselves, I recognised that this would not always be reliable or feasible: the prototype might not perform as expected, and a failure could disrupt the flow of my entire presentation.
As a result, I planned for a video to be displayed on the monitor, a screen recording of a real user interacting with the system, to simulate the experience as closely as possible. This video acts as a backup, ensuring that even when I am not physically present to explain the work, the interaction and flow of the prototype are still clearly communicated.
Filming
I filmed the video in the studio against a white wall to achieve a clean and minimal visual outcome, keeping the focus on the interaction itself. I wanted the documentation to feel straightforward and uncluttered, without unnecessary distractions. Setting this up required some adjustment, as the pedestals available did not support mounting the monitor. To resolve this, I reconfigured the space and used a shelf as a stable base for the monitor instead, allowing me to maintain both the intended framing and overall aesthetic of the video.
Reconfiguration & Web Adaptation
After presenting the final prototype on a monitor during Open Studios and in my production documentation, I began to question its longevity and accessibility beyond the physical setup. This led me to adapt the work for a web-based format, allowing audiences, including you reading this, to engage with the project directly without needing a portrait-oriented screen.
In translating the experience to web scale, I had to critically reconsider aspects of the UI and UX, making adjustments to text, layout, and the placement of interface elements. This process revealed how context shapes interaction, requiring me to rethink not just the format, but how the experience is communicated and navigated outside the controlled exhibition environment.
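One small example of the kind of adjustment this involved, assuming the stages toggle between the original portrait arrangement and a landscape one via CSS classes (the class names here are hypothetical):

```typescript
// Sketch of an orientation-aware layout switch for the web adaptation.
// The CSS class names are hypothetical placeholders.
const portraitQuery = window.matchMedia("(orientation: portrait)");

function applyLayout(isPortrait: boolean): void {
  document.body.classList.toggle("portrait-layout", isPortrait);
  document.body.classList.toggle("landscape-layout", !isPortrait);
}

applyLayout(portraitQuery.matches);
portraitQuery.addEventListener("change", (e) => applyLayout(e.matches));
```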
Jayanesh has been involved as a user since the early stages of the prototype, which has allowed him to witness how the project evolved over time. In the earlier version, interaction was fragmented: each of the three stages existed on a separate website, and progression required users to manually drag and drop JSON data between tabs for the system to read. This process, while conceptually aligned with exposing data transfer, introduced friction that disrupted the overall experience.
Observing his engagement with these iterations helped me recognise that these technical mechanics, although intentional, were not as intuitive or accessible as I had assumed.
In response to the accumulated feedback, I consolidated the stages into a single, more seamless system and removed the need for manual JSON handling, as sketched below. I then conducted informal user testing with Jayanesh again, and he responded positively to the revised version. He noted that the experience felt significantly clearer and more fluid, particularly appreciating that there was no longer a need to download or transfer data files.
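A hedged sketch of what that consolidation means structurally; the field names are assumptions, not the prototype’s exact data model:

```typescript
// Hedged sketch of the consolidation: rather than exporting JSON from one
// site and dropping it into the next, the stages share in-memory state.
// The SessionState fields are assumptions for illustration.
interface SessionState {
  faceImage?: string;   // captured during the biometric layer
  topCategory?: string; // derived during the behavioural layer
  averageMs?: number;
}

const session: SessionState = {};

// Each stage now writes to the shared object and advances directly,
// so the user never touches a data file.
function completeBehaviouralStage(topCategory: string, averageMs: number): void {
  session.topCategory = topCategory;
  session.averageMs = averageMs;
}
```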
He did, however, point out an area for improvement: the function of the bottom panel was not immediately obvious. He suggested that I either explain it verbally during presentations or increase its visual prominence within the interface, prompting me to reconsider how key interactive elements are communicated to users.
Prototype Website
With that, I present the final prototype. Try it out for yourself! I would recommend clicking the "Launch Prototype" button for the full experience.