Week 10 / Task-Based Making

The Listening Post Ben Rubin and Mark Hansen

Task-Based Making

From Observation to Conceptual Development

This week's task-based making focused on translating observation into action. After refining my research through the RPO, I set out to create a structured plan that would move my project forward in a deliberate and methodical way. The ArtScience Museum field trip played a significant role in shaping this week’s direction. Experiencing the spatial, interactive, and sensory qualities of the exhibitions made me see more clearly how digital exhibition design can be used to make invisible systems tangible.

Google's Quick, Draw! stood out to me. Watching my sketches being instantly interpreted by an AI made the invisible computational process visible in a playful yet experimental way. I asked myself: if such drawings can show how machines interpret humans, could I reverse that process to expose how human experience is being interpreted by machines?

This line of questioning guided the creation of the task-based making experiment. Instead of showing how AI "reads" us, I want to invert the relationship and invite participants to share their own encounters with algorithmic systems. Each story shared becomes part of a collective, living archive. The aim is to transform private digital moments into a shared emotional space, where people can recognise patterns, empathise with one another, and critically reflect on how algorithms quietly shape everyday life.

Comparison of Task Experiment & Google's Quick, Draw!

Task Plan

To support this, I designed a clear plan outlining:

⑴ Aim & Purpose
• Make algorithmic experiences visible and shareable
• Turn private digital encounters into a collective emotional archive, evoking empathy and dialogue by allowing viewers to recognise shared vulnerabilities and reflect on how algorithmic systems shape everyday life
• Display responses publicly on screen
• Foundation for exploring how such a participatory archive could evolve into an exhibition design that is spatial, immersive, and socially reflective

⑵ Method
• Develop a web-based interface where participants can anonymously submit short written entries, images, or screenshots reflecting their encounters with algorithmic systems (e.g., targeted ads, recommendation loops, misclassifications)
• Aggregate all submissions into a generative visual display that continuously updates, forming a living archive of algorithmic experiences
• Observe how participants interact with and emotionally respond to the growing archive
• Sketch and illustrate how this system could expand into a physical exhibition space, envisioning screens, projections, or printed “walls of data” to translate the digital archive into a collective spatial experience

⑶ Plan
• Define data structure (text, image, reflection), and design low-fidelity wireframes for the upload and viewing interface
• Build working prototype for submissions and visual display using HTML, CSS, and JS
• Soft launch with a few participants to collect initial responses (observe tone, emotion, and engagement)
• Map out how this participatory archive could be adapted into an exhibition design (spatial sketches and interface visuals)


This experiment aims to connect data-driven visualisation with participatory storytelling, transforming individual encounters with algorithms into a shared public experience. By doing so, it frames the exhibition not just as a site of observation but as a space for emotional reflection and conversation.

DIA X NIKE Basketball Kinetic Typography

Andreas' Feedback

Modular Thinking Diagram

Andreas responded positively to my task plan, noting that the structure I created is clear, actionable, and sustainable across iterations. He acknowledged the ambition of the experiment but encouraged me to think modularly rather than trying to build one large, all-in-one system from the start. His advice reframed the project into five manageable modules:

⑴ Front-End Visuals
Work on the visual display of algorithmic experiences separately from the user input data collection

⑵ Front-End Data Collection
Develop an interface that collects user input

⑶ Back-End Archive
Collect and store submissions on the backend

⑷ Full-Stack
Bring ⑴, ⑵, and ⑶ together

⑸ User Testing

⑴ Front-End Visuals: Text Response Box

Text Box High Fidelity Mockup Illustrator

I visualise participants' shared experiences as follows: after a response is submitted through the form, its content appears in a text box container for readability. Initially, I did not add the application tag, but after much consideration, I decided to include it to show which application the experience happened on, for better categorisation. Once the response is submitted, it fades in and appears on the homepage.
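As a minimal sketch of this display idea, a submitted response could be rendered into a text box that carries its application tag. The function names, class names, and entry fields below are illustrative, not the project's actual code; the `fade-in` class is assumed to be styled in CSS with an opacity transition so the box fades in as described.

```javascript
// Escape user-submitted text so it can be placed safely into HTML.
function escapeHtml(text) {
  return text
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
}

// Build the HTML for one response box: an application tag on top,
// the participant's reflection below. `fade-in` is assumed to be a
// CSS class with an opacity transition.
function renderResponseBox(entry) {
  return [
    '<div class="response-box fade-in">',
    `  <span class="app-tag">${escapeHtml(entry.app)}</span>`,
    `  <p class="response-text">${escapeHtml(entry.content)}</p>`,
    "</div>",
  ].join("\n");
}
```

The homepage script would then append this markup to the archive container whenever a new submission arrives.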

For future iterations, I plan to add a function where participants can attach and upload an image, e.g. a screenshot, to visually show their experience. To increase engagement, I could make these boxes reactive, where participants can press and hold to react with an emoji, or comment. These small gestures would encourage participants to respond to one another's stories, transforming the archive from a static collection of entries into a dynamic, socially responsive space where people can collectively process and contextualise their encounters with algorithmic systems.

⑵ Front-End Data Collection: Response Form

Response Form High Fidelity Mockup Illustrator

Since the idea is to make this a live wall within an exhibition space, in order for participants to share their experiences, there will be a QR code that participants can scan, which directs them to the response submission form on their own device. The form presents a clean, minimal interface containing the project title, a short description, and a text input field where participants can freely write about their algorithmic experiences. The layout is intentionally simple to reduce friction and make the act of sharing feel intuitive and safe.

At the bottom of the screen, two options are provided: a “Back to Archive” button that returns them to the collective display, and a “Submit” button that instantly sends their reflection into the growing archive. This flow ensures that participants can contribute seamlessly while maintaining a clear sense of where their input goes and how it becomes part of the shared experience.
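The "Submit" flow above could be sketched in JavaScript as a small validation-and-send step. Everything here is an assumption for illustration: the `/api/responses` endpoint path, the field names, and the helper name are hypothetical, not the project's actual code.

```javascript
// Package a participant's text into an archive entry, rejecting
// empty submissions so blank entries never reach the wall.
function buildSubmission(text, app) {
  const content = (text || "").trim();
  if (content.length === 0) {
    return null; // nothing to submit
  }
  return {
    id: Date.now().toString(36),       // simple unique-ish ID
    app: app || "unspecified",         // which application it happened on
    content,
    timestamp: new Date().toISOString(),
  };
}

// In the browser, the "Submit" button handler might then send it:
// fetch("/api/responses", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildSubmission(inputField.value, "Instagram")),
// });
```

Keeping validation in one small function means the form can be redesigned later without touching how entries are packaged.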

⑶ Back-End Archive: Data Storage

In one of CiD's workshops, we learnt about Node.js. Since my experiment collects responses from participants and immediately displays them on the live wall, I needed a backend that is fast, lightweight, and easy to prototype. Node.js fits this need, allowing the entire process to communicate seamlessly in real time.

To transform the experiment from a static visual prototype into a participatory system, I needed a way to collect, store, and update participant submissions in real time. This led me to design a simple back-end architecture made of two core components:

⑴ server.js (brain)
As the central controller of the entire experiment, this handles the logic of receiving new participant submissions, reading the entire existing archive, and sending data back to the front-end. It defines simple API endpoints backed by helpers such as readAll() and writeAll(), which keep the entire system modular and easy to debug.

⑵ responses.json (memory)
Inspired by my research into Google's Quick, Draw! dataset, JSON offers the format I need for this experiment. Functioning as the project's temporary database, every submission is stored here as a simple JSON array. Each entry is saved with an ID, content, and timestamp. It allows the archive to "remember" past inputs even after the server restarts.
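To illustrate, one entry in responses.json could look like the following; the exact field names and values are hypothetical, standing in for the ID, content, and timestamp described above:

```
[
  {
    "id": "a1b2c3",
    "content": "I mentioned hiking once and now every ad I see is for boots.",
    "timestamp": "2025-01-15T09:30:00Z"
  }
]
```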

Code Project Structure

Project Structure VS Code

A clear and intentional project structure is essential because this experiment operates across multiple layers. Without a modular architecture, the system would be difficult for me to maintain, debug, or scale. By separating the project into distinct folders and responsibilities, I can iterate on each layer independently while ensuring all components still work seamlessly.

⑴ Front-End Visuals (public folder)
This layer contains everything participants see on the screen. It is responsible for displaying, updating, and hosting the responses. Separating the visuals from the submission logic allows me to prototype the look, behaviour, and experience of the archive without touching the data processing layer.

⑵ Front-End Data collection (submission interface)
This participant-facing input tool manages the submission form, gathers text, and communicates the data to the server. Isolating the form allows me to redesign the interaction flow without affecting the archive display entries.

⑶ Back-End Archive (Node.js server + responses.json)
As the memory of the system, it handles receiving, appending, saving, and broadcasting updates to the front-end.

Demo

Full Stack Integration Demo Mockup

Challenges Faced

Reaching this stage was not easy. I went through six rounds of iteration, each of which revealed new issues in readability, structure, interaction, and conceptual clarity.

⑴ Interface Issues (Iteration 1-2)
The first versions displayed submissions in a single expanding text box that grew to fill the entire screen. It was not the UI I wanted: it was overwhelming and visually unreadable, especially when new submissions were added. It felt too flat and mechanical.

⑵ Readability Challenges (Iteration 3-4)
When more submissions were added, they overlapped visually and became difficult to read. There were opacity issues with the text display box that made it look washed out.

⑶ Issues with Title Placement (Iteration 5)
After the first few rounds of iteration, I realised that people might not understand what the text wall is about if they just see it on the screen. Having a title would provide better context so that participants could immediately understand: What is this wall? Why are these messages here? What am I supposed to do? I tried several placements, such as putting it in a corner, but it looked too much like a web interface. A sticky header was visually distracting, and a floating label interfered with the text display boxes.

⑷ Poor Text "Character" (Iteration 6)
Another major challenge was the behaviour of these text display boxes. Initially they popped in abruptly, sat still, and behaved very rigidly. I had to add subtle transitions to let them fade in as they appear.

Task Feedback

After presenting this task experiment process and demo to Andreas, his feedback helped reframe the experiment not just as a single functioning prototype, but as a system made of interdependent layers. He emphasised the importance of distinguishing between the visual, technical, and system layers.

⑴ Visual Layer
Concerns what audiences see such as the aesthetic quality, text behaviour, motion, and emotional tone.

⑵ Technical Layer
Captures how the system runs: the form logic, submission handling, and real-time updates.

⑶ System Layer
Describes how everything interacts, how user input moves from the interface → to the backend → back to the visual display.

The current submission form also feels too plain; the front-end visuals in particular are too flat. I need to strengthen my exhibition thinking, to articulate why this work is framed as an exhibition and what it achieves conceptually. It would help to sketch out how participants will interact with the wall and what the space communicates emotionally and socially.

The Room of Change Giorgia Lupi 2019

The Room of Change Giorgia Lupi

The Room of Change is a 30-metre-long “data tapestry” created for the XXII Triennale di Milano (2019), for the exhibition Broken Nature: Design Takes on Human Survival. The installation spans the walls of a gallery room. From left to right, it visualises change over time: from the past, through the present, and into a speculative future. It aggregates and layers multiple global datasets (e.g. world population, energy consumption, temperature, disease rates) alongside more local or human-scale stories (e.g. the decline of a specific lake, shifts in economy, demographic shifts in a country), giving both macro and micro views of change.

The piece illustrates the principle of “data humanism”: treating data not just as numbers but as stories, as lives, as collective memory. Rather than hiding complexity under charts, the tapestry embraces density and invites reflection. By covering the wall with layered information, it reveals how global systems and local lives are entangled.

Research Relevance

Like Lupi’s tapestry, I’m interested in how data turns experience into representation, how personal behaviours, preferences, histories can be abstracted into data, and how that representation can be visualised in a way that retains meaning and emotional weight. I’m also drawn to the scale-shift: from individual data to global patterns, from the granular to the systemic.

The Room of Change demonstrates how a wall-wide tapestry can hold centuries of change; similarly, my work experiments aim to encode identity traits and actions into visual, material form, making visible what is usually hidden. The immersive, sensory quality of the installation inspires me to think beyond charts and graphs, to treat data as living environments, where viewers/participants don’t just observe, but inhabit, explore, reflect.

The Room of Change Exhibition Space Sketch

Lupi's hand-drawn sketch of The Room of Change reveals more than just the layout of an installation. It exposes her entire way of thinking about how data can be transformed into an experiential, architectural narrative. The sketch breaks the exhibition down into time, themes, and layers, showing how the room wraps around the viewer as a continuous, evolving data story. Instead of treating data as static information, she designs it spatially: each coloured band becomes a timeline, each vertical slice a moment in history, and the entire room a living map of human and environmental change.

For my Dissertation Pillar 3: Digital Exhibition Design, this sketch becomes an instructional model. It demonstrates how a complex system can be translated into a digestible visual plan. The quick annotations show how she frames data through relationships and categories rather than raw numbers. This aligns strongly with my goal to translate invisible algorithmic processes into experiences that can be seen, felt, and navigated. Her sketch shows that an exhibition about systems is structured around:

• Clear spatial logic (what surrounds the user, what changes over time)
• Legible thematic bands (categories the audience can follow without explanation)
• A balance between micro and macro (zooming from individual stories to larger patterns)
• Embodied navigation (the visitor learns by walking, looking, comparing)